I’m a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
titotal
As I said, I don’t think your statement was wrong, but I want to give people a more accurate perception as to how AI is currently affecting scientific progress: it’s very useful, but only in niches which align nicely with the strengths of neural networks. I do not think similar AI would produce similarly impressive results in what my team is doing, because we already have more ideas than we have the time and resources to execute on.
I can’t really assess how much speedup we could get from a superintelligence, because superintelligences don’t exist yet and may never exist. I do think that 3xing research output with AI in science is an easier proposition than building digital super-einstein, so I expect to see the former before the latter.
I found this article well written, although of course I don’t agree that AGI by 2030 is likely. I am roughly in agreement with this post by an AI expert responding to the other (less good) short-timeline article going around.
I thought that instead of critiquing the parts I’m not an expert in, I might take a look at the part of this post that intersects with my field, where you mention material science discovery, and pour just a little bit of cold water on it.
A recent study found that an AI tool made top materials science researchers 80% faster at finding novel materials, and I expect many more results like this once scientists have adapted AI to solve specific problems, for instance by training on genetic or cosmological data.
So, an important thing to note is that this was not an LLM (neither was AlphaFold), but a specially designed deep learning model for generating candidate material structures. I covered a bit about them in my last article, and this is a nice bit of evidence for their usefulness. The possibility space for new materials is ginormous and humans are not that good at generating new ones: the paper showed that this tool boosted productivity by making that process significantly easier. I don’t like how the paper described this as “idea generation”: it evokes the idea that the AI is making its own Newtonian flashes of scientific insight, but actually it’s just mass-generating candidate materials that an experienced professional can sift through.
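To make the workflow concrete, here is a rough sketch of what a generate-and-screen pipeline of this kind looks like. To be clear, the function names and the random scoring below are invented stand-ins for illustration, not the actual tool’s code or API:

```python
# A minimal, illustrative sketch of the generate-and-screen workflow described above.
# The functions are toy stand-ins: the real tool uses a trained deep generative model
# and learned property predictors, not random guesses.
import random

def generate_candidates(n):
    """Stand-in for a generative model proposing candidate material compositions."""
    elements = ["Li", "Na", "Mg", "Al", "Si", "Ti", "Fe", "Cu", "Zn", "O", "S", "N"]
    return [tuple(random.sample(elements, 3)) for _ in range(n)]

def predicted_promise(candidate):
    """Stand-in for a cheap learned surrogate that scores how promising a candidate is."""
    return random.random()

# Mass-generate candidates, score them cheaply, and keep a shortlist.
candidates = generate_candidates(10_000)
shortlist = sorted(candidates, key=predicted_promise, reverse=True)[:50]

# The slow, expensive step still involves an experienced human (plus simulation and synthesis).
for candidate in shortlist:
    print("queued for expert review:", candidate)
```

The point is that the AI’s contribution is the cheap, massive front end of this funnel; the judgement calls and the lab work remain with the humans.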
I think your quoted statement is technically true, but it’s worth mentioning that the 80% faster figure was only for people previously in the top decile of performance (i.e. the best researchers); for people who were not performing well, there was no evidence of a real difference. In practice, the effect of the tool on progress was smaller than this: it was plausibly credited with increasing the number of new patents at a firm by roughly 40%, and the number of actual prototypes by 20%. You can also see that productivity is not continuing to increase: they got their boost from the improved generation pipeline, and now the bottleneck is somewhere else.
To be clear, this is still great, and a clear deep learning success story, but it’s not really in line with colonizing Mars in 2035 or whatever the ASI people are saying now.
In general, I’m not a fan of the paper, and it really could have benefited from some input from an actual material scientist.
I think if you surveyed experts on LLMs and asked them “which was a greater jump in capabilities, GPT-2 to GPT-3, or GPT-3 to GPT-4?”, the vast majority would say the former, and I would agree with them. This graph doesn’t capture that, which makes me cautious about over-relying on it.
I feel like this should be caveated with a “long timelines have gotten short… among the people the author knows in tech circles”.
I mean, just two months ago someone asked a room full of cutting edge computational physicists whether their job could be replaced by an AI soon, and the response was audible laughter and a reply of “not in our lifetimes”.
On one side, you could say that this discrepancy is because the computational physicists aren’t as familiar with state-of-the-art genAI, but on the flip side, you could point out that tech circles aren’t familiar with state-of-the-art physics, and are seriously underestimating the scale of the task ahead of them.
I’d be worried about getting sucked into semantics here. I think it’s reasonable to say that it passes the original Turing test, as described by Turing in 1950:
I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
I think given the restrictions of an “average interrogator” and “five minutes of questioning”, this prediction has been achieved, albeit a quarter of a century later than he predicted. This obviously doesn’t prove that the AI can think or substitute for complex business tasks (it can’t), but it does have implications for things like AI-spambots.
The method in the case of quantum physics was to meet their extraordinary claims with extraordinary evidence. Einstein did not resist the findings of quantum mechanics, only their interpretations, holding out hope that he could make a hidden variable theory work. Quantum mechanics became accepted because its proponents were able to back up their theories with experimental data that could be explained in no other way.
Like a good scientist, I’m willing to follow logic and evidence to their logical conclusions. But when I actually look at the “logic” that is being used to justify doomerist conclusions, it always seems incredibly weak (and I have looked, extensively). I think people are rejecting your arguments not because you are a rogue outsider, but because they don’t think your arguments are very good.
I feel like the counterpoint here is that R&D is incredibly hard. In regular development, you have established methods of how to do things, established benchmarks of when things are going well, and a long period of testing to discover errors, flaws, and mistakes through trial and error.
In R&D, you’re trying to do things that nobody has ever done before, and simultaneously establish methods, benchmarks, and error checks for that new approach, which carries a ton of potential pitfalls. Also, because nobody has ever done it before, the AI is always operating far further outside its training distribution than in regular work.
I did read your scenario. I’m guessing you didn’t read my articles? I’m closely tracking the use of AI in material science, and the technical barriers to things like nanotechnology.
“AI” is not a magic word that makes technical advancements appear out of nowhere. There are fundamental physical limits to what you can realistically model with finite computer resources, and the technical hurdles to Drexlerian nanotech are absurd in their difficulty. To make experimental advances in something like nanotech, you need extensive lab experimentation: the AI does not have nanotech with which to build those labs, and it takes humans more than a year to build them.
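To give a rough sense of the “finite computer resources” point: accurate quantum chemistry methods scale very steeply with system size (coupled cluster methods like CCSD(T) scale roughly as N^7), so even enormous hardware gains buy surprisingly little extra system size. A back-of-the-envelope toy calculation, with a made-up baseline:

```python
# Toy illustration of why steep computational scaling limits what you can simulate.
# Assumes a method whose cost grows as N**7 in system size N (roughly the scaling of
# CCSD(T), a standard high-accuracy quantum chemistry method). The baseline is made up.
baseline_N = 50  # hypothetical system size affordable today

for speedup in [10, 1_000, 1_000_000]:
    # With `speedup` times more compute at a fixed time budget, the affordable N
    # only grows as speedup**(1/7).
    new_N = baseline_N * speedup ** (1 / 7)
    print(f"{speedup:>9,}x more compute -> affordable system size ~{new_N:.0f} (up from {baseline_N})")
```

A million times more compute gets you less than a factor of ten in system size, which is why “just throw AI and more hardware at it” doesn’t dissolve these limits.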
I usually try to avoid the word “impossible” when talking about speculative scenarios… but by giving it a 1 year time limit, the scenario you have written is impossible.
I work in computational material science and have spent a lot of time digging into Drexlerian nanotech. The idea that Drexler-style nanomachines could be invented in 2026 is straight-up absurd. Progress towards nanomachines has stalled out for decades. This is not a “20 years from now” type project; absent transformative AI speedups, the tech could be a century away, or even straight-up impossible. And the effect of AI on material science is far from transformative at present; this is not going to change in one year.
You are not doing your cause a service by proposing scenarios that are essentially impossible.
I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn’t already.
To back this up: I mostly peruse non-rationalist, left leaning communities, and this is a concern in almost every one of them. There is a huge amount of concern and distrust of AI companies on the left.
Even AI-skeptical people are concerned about this: AI that is not “transformative” can still concentrate power. Most lefties think that AI art is shit, but they are still concerned that it will cost people jobs: this is not a contradiction, as taking jobs does not require the AI to be better than you, just cheaper. And if AI does massively improve, this is going to make them more likely to oppose it, not less.
The Gini coefficient “is more sensitive to changes around the middle of the distribution than to the top and the bottom”. When you are talking about the top billionaires, like Ozzie is, it’s not the correct metric to use:
In absolute terms, the income share of the top 1% in the US has been steadily rising since the 1980s (although this is not true for countries like Japan or Sweden).
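As a toy illustration of the difference (synthetic lognormal incomes, not real data): if you give only the top 1% a large raise and leave everyone else unchanged, the top-1% share moves proportionally far more than the Gini does.

```python
# Toy calculation: how the Gini coefficient vs. the top-1% income share respond
# when ONLY the very top incomes change. Synthetic lognormal incomes, not real data.
import numpy as np

def gini(x):
    """Gini coefficient from the standard sorted-values formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * np.sum(x)) - (n + 1) / n

def top_share(x, frac=0.01):
    """Share of total income going to the top `frac` of earners."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(1, int(len(x) * frac))
    return x[-k:].sum() / x.sum()

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.5, sigma=1.0, size=100_000)  # synthetic incomes

# Give the top 1% a 50% raise; leave everyone else unchanged.
boosted = incomes.copy()
boosted[boosted >= np.quantile(boosted, 0.99)] *= 1.5

print(f"Gini:         {gini(incomes):.3f} -> {gini(boosted):.3f}")
print(f"Top 1% share: {top_share(incomes):.3f} -> {top_share(boosted):.3f}")
```

The Gini moves only slightly while the top share jumps, which is exactly why it’s the wrong metric for a question about billionaires.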
I’m not sure the “passive” finding should be that reassuring.
I’m imagining someone googling “ethical career” 2 years from now and finding 80k, noticing that almost every recent article, podcast, and promoted job is based around AI, and concluding that EA is just an AI thing now. If AI-based careers aren’t a fit for them (whether through interest or skillset), they’ll just move on to somewhere else. Maybe they would have been a really good fit for an animal advocacy org, but if their first impressions don’t tell them that animal advocacy is still a large part of EA, they aren’t gonna know.
It could also be bad even for AI safety: there are plenty of people here who were initially skeptical of AI x-risk, but joined the movement because they liked the malaria nets stuff. Then over time and exposure they decided that the AI risk arguments made more sense than they initially thought, and started switching over. In the hypothetical future 80k, where malaria nets are de-emphasised, that person may bounce off the movement instantly.
Remember that this is graphing the length of task that the AI can do with an over 50% success rate. The length of task that an AI can do reliably is much shorter than what is shown here (you can look at figure 4 in the paper): for an 80% success rate it’s 30 seconds to a minute.
Being able to do a month’s worth of work at a 50% success rate would be very useful and productivity boosting, of course, but would it really be close to recursive self-improvement? I don’t think so. I feel that some part of complex projects needs reliable code, and that will always be a bottleneck.
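One way to see why reliability is the bottleneck: if a long project decomposes into many steps that each have to go right, per-step success rates compound. A toy calculation (the step counts and success rates are made up, and independence between steps is assumed):

```python
# Toy illustration: per-step success rates compound over multi-step projects.
# If a project needs k sequential steps and each succeeds independently with
# probability p, the chance the whole thing works unaided is p**k.
for p in [0.50, 0.80, 0.99]:
    for k in [10, 100, 1000]:
        print(f"per-step success {p:.0%}, {k:>4} steps -> whole-project success {p ** k:.2e}")
```

Even 99% per-step reliability collapses to well under a 1% chance of success over a thousand steps, so somebody (or something) still has to catch and fix the failures.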
Welcome to the forum. You are not missing anything: in fact you have hit upon some of the most important and controversial questions about the EA movement, and there is wide disagreement on many of them, both within EA and with EA’s various critics. I can try and give both internal and external sources asking or rebutting similar questions.
In regards to the issue of unintended consequences from global aid, and the global vs local question: this was raised by Leif Wenar in a hostile critique of EA here. You can read some responses and rebuttals to this piece here and here.
With regards to the merits of Longtermism, this will be a theme of the debate week this coming week, so you should be able to get a feel for the debate within EA there. Plenty of EAs are not longtermist for exactly the reasons you described. Longtermism is the focus of a lot of external critique of EA as well, with some seeing it as a dangerous ideology, although that author has themselves been exposed for dishonest behaviour.
AI safety is a highly speculative subject, and there are a wide variety of views on how powerful AI can be, how soon “AGI” could arrive, how dangerous it is likely to be, and what the best strategy is for dealing with it. To get a feel for the viewpoints, you could try searching for “P(doom)”, which is a rough estimate of the probability of AI-caused doom. I might as well plug my own argument for why I don’t think it’s that likely. For external critics, Pivot to AI is a newsletter that compiles articles with the perspective that AI is overhyped and that AI safety isn’t real.
The case for “earning to give” is given in detail here. The argument you raise about working for unethical companies is one of the most common objections to the practice, particularly in the wake of the SBF scandal; however, in general EA discourages ETG with jobs that are directly harmful.
“The Bell Curve” was pilloried by the wider scientific community, and for good reason. I recommend watching this long YouTube video summarizing the scientific rebuttals.
As for genetic engineering, I don’t see how you can separate it from ethical implications. As far as I can tell, every time humanity has believed that a group of people was genetically inferior, it has resulted in atrocities against that group of people. Perhaps you can get something working by specifically limiting yourself to preventing diseases and so on, but in general, I don’t think society has the ability to handle having actual “superbabies”.
Again, I’m not sure exactly how to respond to comments like this. Like, yeah, if AI could reliably do everything a top researcher does, it could enable a lot of breakthroughs. But I don’t believe that an AI will be able to do that anytime soon. All I can say is that there is a massive gap between current AI capabilities and what they would need to fully automate a material science job. 30 years sounds like a long time, but AI winters have lasted that long before: just because AI has advanced rapidly recently, there’s no guarantee that it won’t stall out at some point.
I will say that I just disagree that an AI could suddenly go from “no major effect on research productivity” to “automate everything” in the span of a few years. The scale of difficulty of the latter compared to the former is just too massive, and with all new technologies it takes a lot of time to experiment and figure out how to use them effectively. AI researchers have done a lot of work to figure out how to optimise the current paradigm and get good at it: but by definition, the next paradigm will be different, and will require different things to optimise.
Hey, thanks for weighing in, those seem like interesting papers and I’ll give them a read through.
To be clear, I have very little experience in quantum computing and haven’t looked into it that much, so I don’t feel qualified to comment on it myself (hence why this was just an aside there). All I am doing is relaying the views of prominent professors in my field, who feel very strongly that it is overhyped and were willing to say so on the panel, although I do not recall them giving much detail on why they felt that way. This matches the general impression I’ve gotten from other physicists in casual conversations. If I had to guess the source of these views, I’d say it was skepticism of the ability to actually build such large-scale fault-tolerant systems.
Obviously this is not strong evidence and should not be taken as such.
From my (small) experience in climate activist groups, I think this is an excellent article.
Some other points in favour:
Organising for small, early wins allows your organisation to gain experience with how to win, and what to do with said wins. A localised climate campaign will help you understand which messages resonate with people and which are duds, and familiarise yourself with how to deal with media, government, etc.
It also helps to scale with your numbers: a few hundred people aren’t going to be enough to stop billion-dollar juggernauts, but they can cause local councils to feel the heat.
One counterpoint: you shouldn’t be so unambitious that people feel like you’re wasting their time. If Just Stop Oil had started with a campaign to put flower gardens outside public libraries, they wouldn’t have attracted the committed activist base they needed.
If you look at the previous threads you posted, you’ll see I was a strong defender of giving your project a chance. I think grassroots outreach and support in areas like yours is a very good thing, and I’m glad to see you transparently report on your progress with the project.
That being said, I have to agree with the others here that investing in crypto coins like the one you mentioned is generally a bad idea. I have not heard of either of the people you claim are backing the project. The statement that “most people believe Jelly will soon be the new tiktok in the west” is not at all true. I live in the west and I guarantee you that almost nobody has ever heard of this project, and there has not been significant buzz around crypto projects in the west for a good couple of years now.
If you are skeptical, I recommend you go onto Reddit and ask people in non-crypto spaces if they have heard of Jelly or are excited about the idea.
People can make money off crypto: but for the average user it’s more or less a casino, where the odds are not in your favour.
I apologise if this comes off as overly critical, but I have heard of a lot of people who have fallen victim to scammers and scoundrels in the crypto space, and I don’t want you to be one of them.
I am having trouble understanding why AI safety people are even trying to convince the general public that timelines are short.
If you manage to convince an investor that timelines are very short without simultaneously convincing them to care a lot about x-risk, I feel like their immediate response will be to rush to invest briefcases full of cash into the AI race, thus helping make timelines shorter and more dangerous.
Also, if you make a bold prediction about short timelines and turn out to be wrong, won’t people stop taking you seriously the next time around?