I’m a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
titotal
I assume they saw it at low karma. The first internet archive snapshot of this page had it at −4 karma.
I don’t think it’s “politically wise” to be associated with someone like Musk, who is increasingly despised worldwide, especially among the educated, intelligent population that is EA’s primary recruitment ground. This goes quintuple for widely acknowledged racists like Hanania.
Elon has directly attacked every value I hold dear, and has directly screwed over life-saving aid to the third world. He is an enemy of effective altruist principles, and I don’t think we should be ashamed to loudly and openly say so.
A) There is no concrete proof that ASI is actually on the near-term horizon.
B) There is no concrete proof that if “uncontrolled” ASI is made, it is certain to kill us.
C) There is no concrete proof that the US and China would be equally bad if they obtained ASI. We have limited information as to what each country will look like decades in the future.
Many outlets don’t take the possibility of rapid AI development seriously, treating AGI discussions as mere marketing hype.
I think it would be a huge mistake to condition support for AI journalism on object-level views like this. Being skeptical of rapid AI development is a perfectly valid opinion to have, and I think it’s pretty easy to make a case that the actions of some AI leaders don’t align with their words. Both of the articles you linked seem perfectly fine and provide evidence for their views: you just disagree with the conclusions of the authors.
If you want journalism to be accurate, you can’t prematurely cut off the skeptical view from the conversation. And I think skeptical blogs like Pivot-to-AI do a good job at compiling examples of failures, harms, and misdeployments of AI systems: if you want to build a coalition against harms from AI, excluding skeptics is a foolish thing to do.
I have not seen a lot of evidence that EA skills are very transferable to the realm of politics. As counterexamples, look at the botched Altman ouster, or the fact that AI safety people ended up helping start an AI arms race: these partially seem to come from a place of poor political instincts. EA also draws disproportionately from STEM backgrounds, which are generally considered comparatively weak at people skills (accurately, in my experience).
I think combating authoritarianism is important, but EA would probably be better off identifying other people who are good at politics and sending support their way.
One of the alleged Zizian murderers has released a statement from prison, and it’s a direct plea for Eliezer Yudkowsky specifically to become a vegan.
This case is getting a lot of press attention and will likely spawn further coverage in the form of true crime, etc. The likely effect will be to cement Rationalism in the public imagination as a group of crazy people (regardless of whether the group in general opposes extremism), and groups and individuals connected to Rationalism, including EA, will be reputationally damaged by association.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge quantity of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to “compress” the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute intensive process, but thanks to U3’s growing control over AI data centers, U3 manipulates billions of dollars of compute.
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Computational atomic physicist here: you are vastly, vastly underestimating the difficulty of molecular simulations. Keep in mind that exactly solving the electronic structure of a couple dozen atoms would take longer than the lifetime of the universe. We have approximations that can get us into the right ballpark in reasonable time, but never to exact answers. See here for a more in-depth discussion.
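To put rough numbers on that claim (the figures here are my own illustrative round numbers, not from the linked discussion): the exact approach, full configuration interaction, has to span every way of placing the electrons among the available spin-orbitals, so the problem size grows combinatorially. For roughly 100 electrons in roughly 500 spin-orbitals:

$$
\dim\big(\mathcal{H}_{\mathrm{FCI}}\big) = \binom{M}{N}, \qquad \binom{500}{100} \sim 10^{107}.
$$

No amount of extra hardware closes a gap like that, which is why the field works with approximations and treats them as an aid to experiment.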
Our community has been discussing and attempting machine-learning applications since the ’90s, and only one has seen a breakthrough into actual practical use: machine-learned force potentials. These are trained on other simulation data, so they are inherently limited to the accuracy of the underlying simulation method. They let you run some physics simulations over a longer timescale, and by longer I mean a few nanoseconds, on perfect systems. There are some other promising ML avenues, but none of them seem likely to yield miracles. Computational simulations are an aid to experiment, not a replacement.
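For anyone curious what “trained on other simulation data” looks like in practice, here is a minimal toy sketch of the idea (a one-dimensional Lennard-Jones dimer standing in for real ab initio training data; all names and numbers are mine and purely illustrative):

```python
# Toy sketch of a machine-learned force potential: fit a cheap regression
# model to energies produced by an expensive reference method, then use the
# surrogate in its place. A 1D Lennard-Jones dimer stands in for real
# ab initio training data here.
import numpy as np

def reference_energy(r):
    """The 'expensive' reference calculation (here just Lennard-Jones)."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# 1. Generate training data with the reference method.
rng = np.random.default_rng(0)
r_train = np.sort(rng.uniform(0.9, 3.0, size=60))
e_train = reference_energy(r_train)

# 2. Fit a surrogate (kernel ridge regression with an RBF kernel, by hand).
def rbf(a, b, gamma=20.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

K = rbf(r_train, r_train)
weights = np.linalg.solve(K + 1e-6 * np.eye(len(r_train)), e_train)

def ml_energy(r):
    """The cheap surrogate: only as accurate as the data it was trained on."""
    return rbf(np.atleast_1d(r), r_train) @ weights

# 3. Compare on a point the model hasn't seen.
print(reference_energy(1.5), ml_energy(1.5)[0])
```

The surrogate is fast to evaluate, but it only ever interpolates whatever the reference method produced; it adds no new physics, which is exactly the limitation I’m pointing at.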
I get that this is meant to be some magic super-AI, but I don’t actually see that changing much. There are cold, hard mathematical boundaries here, and the AI can’t spend its entire computational budget trying to make moderate improvements in computational simulation physics.
We already have a gravity-powered method of electricity generation. It’s called “hydro-power”.
I suggest you spend way less time complaining about forms of energy that provably generate excess electricity, and more time explaining why you expect your device to actually work. The electrical energy you output has to come from somewhere. Where?
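To make the “where does it come from” question concrete (illustrative numbers of my own): the energy gravity can give you is just the potential energy of a raised mass,

$$
E = mgh \approx 1000\,\mathrm{kg} \times 9.8\,\mathrm{m/s^2} \times 10\,\mathrm{m} \approx 9.8\times 10^{4}\,\mathrm{J} \approx 0.03\,\mathrm{kWh},
$$

and hydro-power only keeps producing because rain keeps refilling the reservoir. Unless your device is being resupplied with mass at height, or with some other energy input, that one-off amount is the entire budget.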
To be clear, I think your project is 100% doomed to fail, I’m just trying to be nice here.
With the “killing humans through machines” option, a superintelligent AI would probably be smart enough to kill us all without taking the time to build a robot army, which would definitely raise my suspicions! Maybe it would hack nuclear weapons and blow us all up, invent and release an airborne super-toxin, or make a self-replicating nanobot—wouldn’t see it coming, over as soon as we realised it wasn’t aligned.
Drexlerian-style nanotech is not a threat for the foreseeable future. It is not on the horizon in any meaningful sense, and may in fact be impossible. Intelligence, even superintelligence, is not magic, and cannot just reinvent a better design than DNA from scratch, with no testing or development. If Drexlerian nanotech becomes a threat, it will be very obvious.
Also, “hacking nuclear weapons”? Do you understand the actual procedure involved in firing a nuclear weapon?
I think a lot of the critiques are pretty accurate. It seems pretty clear to me that the AI safety movement has managed to achieve the exact opposite of its goals, sparking an AI arms race that the West isn’t even that far ahead in, with the leading AI companies run by less-than-reliable characters. A lot of this was helped along by poor decisions and incompetence from AI safety leaders, such as the terribly executed attempt to oust Altman.
I also agree that the plans of the Yudkowskian-style doomers are laughably unlikely anytime soon. However, I don’t agree that slowing down AI progress has no merit: if AI is genuinely dangerous, there is likely to be a litany of warning signs that do damage but do not wipe out humanity. With slower development, there is more time to respond appropriately, fix mistakes in AI control approaches, etc., so we can gradually learn to adapt to the effects of the technology.
So, I clearly agree with you that cutting PEPFAR is an atrocity and that saving lives is good even if it doesn’t result in structural changes to society.
However, I think the arguments in this essay come close to strawmanning the “root causes” position, and that might result in genuinely good objections being dismissed. You should absolutely sometimes address root causes!
For example, imagine a cholera outbreak caused by a contaminated well. In order to help, Person A might say, “I’m going to hire new doctors and order new supplies to help the cholera victims”. Person B then says, “that isn’t addressing the root cause of the problem; we should instead use that money to try to find and replace the contaminated well”.
Person B could easily have a point here: if they succeed, they end the cholera outbreak entirely, whereas Person A would have to keep pumping money in indefinitely, which would probably cost far more over time.
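A quick way to see why “keep pumping money in indefinitely” can lose badly (my own illustrative numbers, not taken from the essay): the present value of a recurring cost $c$ per year at discount rate $r$ is

$$
\mathrm{PV} = \frac{c}{r}, \qquad \frac{\$100{,}000/\mathrm{yr}}{0.05} = \$2{,}000{,}000,
$$

so a one-off well replacement costing anything less than that is the better buy, provided it actually works.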
When people talk about “structural change”, they are implicitly making this sort of argument: that the non-structural people will have to keep pouring money into the problem, whereas with structural reform the problem could be ended or severely curtailed on a much more permanent basis, so the latter is a better use of our time and resources than the former.
Often this argument is wrong, or deployed in bad faith. Often there is no clear path to structural reform, and the effectiveness might be overstated. But sometimes it is correct, and structural reform really is the right solution: the abolition of slavery, for example. I don’t want to throw the baby out with the bathwater here.
It is unusual for a community as small as rationalism to have produced multiple cult-like groups. And while this particular group is technically opposed to mainstream rationalism, they are still knee-deep in rationalist epistemology (justifying extreme acts with “timeless decision theory” and so on). Something about the epistemics or community of rationalism is probably making these types of incidents more likely.
As long as EA is associated with rationalism, expect to keep getting second-order splashback from these kinds of incidents.
Summary bot already exists, and it looks like it can be summoned with a simple tag? I’m not sure what more you need here.
This looks great! I think organizations outside of the typical EA hotspots are very important.
Is this the outcome of the 28 thousand dollar grant detailed here?
To summarise: six years ago you received a 28 thousand dollar grant, awarded people a bunch of copies of Harry Potter fanfiction that was available online for free and was only tangentially related to EA, and then never actually followed up with any of the people you sent the book to?
This does not look like a cost-effective use of grant money. I assume the vast majority of the recipients either didn’t read it, or read it for amusement without caring about the underlying message, which was not very heavily sold.
I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that
AI will be a revolutionary technology that affects nearly every aspect of society.
Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised.
I’m sorry to say, but EA really doesn’t have that much of a say on the matter. The AI boosters have chosen their side, and it’s on the political right. Which means that the home for anti-AI action will end up on the left, a natural fit for anti-big business, pro-regulation ideas. If EA doesn’t embrace this reality, probably some other left-wing anti-AI movement is going to pop up, and it’s going to leave you in the dust.
For the record, while I don’t think your original post was great, I agree with you on all three points here. I don’t think you’re the only one noticing a lack of engagement on this forum, which seems to only get active whenever EA’s latest scandal drops.
I think there’s an inherent limit to the number of conservatives that EA can appeal to, because the fundamental values of EA are strongly in the liberal tradition. For example, if you believe the five-foundations theory of moral values (which I think has at least a grain of truth to it), conservatives value tradition, authority, and purity far more than liberals or leftists do, and in EA these values are (correctly, imo) not included as specific end goals. An EA and a conservative might still end up agreeing on preserving certain traditions, but the EA will be doing so as a means to the end of increasing the general happiness of the population, not as a goal in and of itself.
Even if you’re skeptical of these models of values, you can just look at a bunch of cultural factors that would be off-putting to the run-of-the-mill conservative: EA is respectful of LGBT people, including respecting transgender individuals and their pronouns; it has a large population of vegans and vegetarians; and it says you should care about far-off Africans just as much as your own neighbours.
As a result, when EA and adjacent groups try to be welcoming to conservatives, they don’t end up getting your Trump-voting uncle: they get unusual conservatives, such as Mencius Moldbug and the obsessive race-IQ people (the Manifest conference had a ton of these). These are a small group and by no means the majority, but even their presence in the general vicinity of EA is enough to disgust and deter many people from the movement.
This puts EA in the worst of both worlds politically: the group of people who are comfortable tolerating both trans people and scientific racists is minuscule, and it seriously hampers the ability to expand beyond the Sam Harris demographic. I think a better plan is to not compromise on progressive values, but to be welcoming to differences on the economic front.
I’d say a big problem with trying to make the forum a community space is that it’s just not a lot of fun to post here. The forum has a dry, serious tone that emulates academic papers, which communicates that this is a place for posting Serious and Important articles, while attempts at levity or informality often get downvoted, and god forbid you don’t write in perfect, grammatically correct English. Sometimes when I’m posting here I feel a pressure to act like a robot, which is not exactly conducive to community bonding.
Epistemologically speaking, it’s just not a good idea to have opinions relying on the conclusions of a single organization, no matter how trustworthy it is.
EA in general does not have very strong mechanisms for incentivising fact-checking: the use of independent evaluators seems like a good idea.