Research assistant at Epoch
Pablo Villalobos
I’m personally still reserving judgment until the dust settles. I think in this situation, given the animosity towards SBF from customers, investors, etc., there are clear incentives to speak out if you believe there was fraud, and to stay quiet if you believe it was an honest (even if terrible) mistake. So we’re likely seeing biased evidence.
Still, a mistake of this magnitude seems at the very least grossly negligent. You can’t preserve both the integrity and the competence of SBF after this. And I agree that it’s hard to know whether you’re competent enough to do something until you do it and succeed or fail. But then the lesson to learn is something like “stay constantly vigilant, seek feedback from the people who know most about what you are trying to do”, etc.
Also, loyalty is only as good as your group is. You can’t use a loyalty argument to defend a member of your group once they are suspected of malfeasance. You might appeal to the loyalty of those who knew them best and didn’t spot any signs of bad behavior before, but that’s only a handful of people.
Well, I also think that the core argument is not really valid. Engagement does not require conceding that the other person is right.
The way I understand it, the core of the argument is that AI fears are based on taking a pseudo-trait like “intelligence” and extrapolating it to a “super” regime. The author claims that this is philosophical nonsense and thus there’s nothing to worry about. I reject that AI fears are based on those pseudo-traits.
AI risk is not in principle about intelligence or agency. A sufficient amount of brute-force search is enough to be catastrophic. An example of this is the “Outcome Pump”. But if you want a less exotic example, consider evolution. Evolution is not sentient, not intelligent, and not an agent (unless your definition of those is very broad). And yet, evolution from time to time makes human civilization stumble by coming up with deadly, contagious viruses.
Now, viruses evolve to make more copies of themselves, so it is quite unlikely that an evolved virus will kill 100% of the population. But if virus evolution didn’t have that life-preserving property, and if it happened 1000 times faster, then we would all die within months.
The analogy with AI is: suppose we spend 10^100000 FLOPs on a brute-force search for industrial robot designs. We simulate the effects of different designs on the current world and pick the one whose effects are closest to our target goal. The final designs will be exceedingly good at whatever the target of the search is, including at convincing us that we should actually build the robots. Basically, the moment someone sees those designs, humanity will have lost some control over its future. In the same way that, once SARS-CoV-2 entered a single human body, the future of humanity suddenly became much more dependent on our pandemic response.
In practice we don’t have that much computational power. That’s why intelligence becomes a necessary component of this, because intelligence vastly reduces the search space. Note that this is not some “pseudo-trait” built on human psychology. This is intelligence in the sense of compression: how many bits of evidence you need to complete a search. It is a well-defined concept with clear properties.
Current AIs are not very intelligent by this measure. Maybe they will be eventually, or maybe it would take some paradigm different from Deep Learning to achieve this level of intelligence. That is an empirical question that we’ll need to resolve. But at no point does SIILTBness play any role in this.
Sufficiently powerful search is dangerous even if there is nothing it is like to be a search process. And ‘powerful’ here is a measure of how many states you visit and how efficiently you do it. Evolution itself is a testament to the power of search. It is not philosophical nonsense, but the most powerful force on Earth for billions of years.
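To make the ‘states visited’ and ‘bits of evidence’ framing concrete, here is a minimal toy sketch (my own illustration, not something from the post under discussion): a blind search over N states has to visit them one by one, while a searcher that can extract roughly one bit of evidence per query only needs about log2(N) queries.

```python
import random

# Toy illustration: find a single target in a space of N candidates.
# Blind search visits states one by one; an evidence-guided search
# (here, binary search) gains ~1 bit per query, so it needs ~log2(N) steps.

N = 2 ** 20                      # size of the search space
target = random.randrange(N)     # the state the search is looking for

# Brute-force search: visit states until we hit the target.
brute_steps = 0
for candidate in range(N):
    brute_steps += 1
    if candidate == target:
        break

# Evidence-guided search: each comparison halves the remaining space.
low, high = 0, N - 1
guided_steps = 0
while low < high:
    guided_steps += 1
    mid = (low + high) // 2
    if target <= mid:
        high = mid
    else:
        low = mid + 1

print(f"brute force visited {brute_steps} states")
print(f"guided search used {guided_steps} queries (~log2(N) = 20 bits)")
```

The gap between the two step counts is one crude way of quantifying how much a search process ‘compresses’ the space, which is the sense of intelligence used above.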
(Note: the version of AI risk I have explored here is a particularly ‘hard’ version, associated with the people who are most pessimistic about AI, notably MIRI. There are other versions that do rest on something like agency or intelligence.)
The objection that I thought was valid is that current generative AIs might not be that dangerous. But the author himself acknowledges that training situated and embodied AIs could be dangerous, and it seems clear that the economic incentives to build that kind of AI are strong enough that it will happen eventually (and we are already training AIs in virtual environments such as Minecraft. Is that situated and embodied enough?).
Upvoted because I think the linked post raises an actually valid objection, even though it does not seem devastating to me and it is somewhat obscured by a lot of philosophy that also does not seem very relevant to me.
There was a linkpost for this on LessWrong a few days ago; I think the discussion in the comments there is good.
I quite liked this post, but I have a minor quibble: engram preservation still does not directly save lives. It gives us an indefinite amount of time, which is hopefully enough to develop the technology to actually save them.
You could say that it’s impossible to save a life since there’s always a small chance of untimely death, but let’s say we consider a life “saved” when the chance of death in unwanted conditions is below some threshold, like 10%.
I would say widespread engram preservation reduces the chance of untimely death from ~100% (assuming no longevity advances in the near future) to the probability of x-risks. Depending on the threshold, you might have to deal with x-risks to consider these lives “saved”.
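As a minimal sketch of that threshold framing (my own toy numbers, not estimates I am endorsing), the comparison boils down to checking whether the remaining probability of untimely death falls below the chosen threshold:

```python
# Toy sketch of the "saved" threshold framing above; the probabilities
# are placeholders for illustration, not actual estimates.

SAVED_THRESHOLD = 0.10  # a life counts as "saved" below this chance of untimely death


def is_saved(p_untimely_death: float, threshold: float = SAVED_THRESHOLD) -> bool:
    """A life is 'saved' when the chance of death in unwanted conditions
    is below the chosen threshold."""
    return p_untimely_death < threshold


p_without_preservation = 0.99  # ~100%, assuming no longevity advances
p_xrisk = 0.15                 # hypothetical x-risk estimate; pick your own

print(is_saved(p_without_preservation))  # False
print(is_saved(p_xrisk))                 # False at 15%; would be True below 10%
```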
Well, capital accumulation does raise productivity, so traditional pro-growth policies are not useless. But they are not enough, as you argue.
Ultimately, we need either technologies that directly raise productivity (like atomically precise manufacturing, fusion, or other cheap energy sources) or technologies that accelerate R&D and commercial adoption. Apart from AI and an increasing global population, I can think of four:
1. Boosting average intelligence via genetic engineering
2. Reforming science and engineering, as well as education (a la dath ilan)
3. Nootropics, BCIs, and other electrochemical methods of tinkering with the brain
4. Systematic experimentation with social technology (having easy ways of testing ideas like open borders, UBI, Georgism, and prediction markets, and adopting those that work)
Announcing Epoch: A research organization investigating the road to Transformative AI
From the longtermist perspective, degrowth is not that bad as long as we are eventually able to grow again. For example, we could hypothetically halt or reverse some growth and work on creating safe AGI or nanotechnology or human enhancement or space exploration until we are able to bypass Earth’s ecological limits.
A small scale version of this happened during the pandemic, when economic activity was greatly reduced until the situation stabilized and we had better tools to fight the virus.
But let’s not be mistaken, growth (perhaps measured by something other than GDP) is pretty much the goal here. If we have to forego growth temporarily, it’s because we have failed to find clever ways of bypassing the current limits. It’s not a strategy, it’s what losing looks like.
It’s also probably politically infeasible: just raising inflation and energy prices is enough to make most people completely forget about the environment. It could not be a planned thing; rather, it would be a consequence of economic forces.
It’s like if Haber and Bosch hadn’t invented their nitrogen process in 1910. We would have run out of fertilizer and then population growth would’ve had to slow down or even reverse.
Great question. The paper does mention micronutrients but does not try to evaluate which of these advantages had a greater influence. I used the back-of-the-envelope calculation in footnote 6 as a sanity check that the effect size is plausible but I don’t know enough about nutrition to have any intuition on this.
Potatoes: A Critical Review
Even if you think all sentient life is net negative, extinction is not a wise choice. Unless you completely destroy Earth, animal life will probably evolve again, so there will be suffering in the future.
Moreover, what if there are sentient aliens somewhere? What if some form of panpsychism is true and there is consciousness embedded in most systems? What if some multiverse theory is true?
If you want to truly end suffering, your best bet would be something like creating a non-sentient AGI that transforms everything into non-sentient matter, and then spends eternity thinking and experimenting to determine whether there are other universes or other pockets of suffering, and how to influence them.
Of course this would entail human extinction too, but it’s a very precise form of extinction. Even if you create an AGI, it would have to be aligned with your suffering-minimizing ethics.
So for now, even if you think life is net negative, preventing ourselves from losing control of the future is a very important instrumental goal. And anything that threatens that control, even if it’s not an existential threat, should be avoided.
I don’t think embryo selection is remotely a central example of 20th century eugenics, even if it involves ‘genetic enhancement’. No one is getting killed, sterilized or otherwise being subjected to nonconsensual treatments.
In the end, it’s no different from other non-genetic interventions to ‘improve’ the general population, like the education system. Education transforms children for life in a way that many consider socially beneficial.
Why are we okay with having such massive interventions on a child’s environment (30 hours a week for 12+ years!), but not on a child’s genes? After all, phenotype is determined by genes+environment. Why is it ok to change one but not the other?
What is morally wrong about selecting which people come to existence based on their genes, when we already make such decisions based on all other aspects of their life? There are almost no illiterate people in the western world, almost no people with stunted growth. We’ve selected them out of existence via environmental interventions. Should we stop doing that?
A valid reason to reject this new eugenics would be fearing that the eugenic selection pressure could end up being controlled by political processes, which could be dangerous. But the educational system is already controlled by political processes in most countries, and again this is mostly seen as acceptable.
Strongly agree, but I want to emphasize something. The word ‘better’ is doing a lot of work here.
I want to be replaced by my better future self, but not my future self who is great at rationalizing their decisions.
I want to be replaced by a better partner, but not by someone who is great at manipulating people into a relationship.
I want to be replaced by a better employee, but not by one who is great at getting the favor of the manager.
I want to be replaced by a machine which can do my job better, but not by an unaligned AGI.
I want to be replaced by better humans, but not by richer humans if they are lonely and depressed.
I want to be replaced by a simulation that feels like the best holiday ever, but not by a contract drafting em.
I want to be replaced if and only if I’m being replaced by something that is, in a very precise sense, better. If the process that will replace me does not share my values, then I want to replace it with one that does.
The fact that risk from advanced AI is one of the top cause areas is to me an example of at least part of EA being technopessimist for a concrete technology. So I don’t think there is any fundamental incompatibility, nor that the burden of proof is particularly high, as long as we are talking about specific classes of technology.
If technopessimism requires believing that most new technology is net harmful, that’s a very different question, and probably one without a well-defined answer.
(When I say ‘we’ I mean ‘me, if I had control over the EA community’. This is just my view, and the actual reasons behind funding decisions are probably somewhat different)
Well, I’m not sure about the numbers but I’d say a pretty substantial percentage of EA funding and donations is going to GiveWell-style global health initiatives. So it’s not like we are ignoring the plight of people right now.
The reason why there is more money than we can spend is that we don’t know of many effective interventions to reduce, say, pandemic risk that scale well with more money.
We could just spend all that money on interventions that might help, like trying to develop broad spectrum antivirals, but it’s legitimately a hard problem and it’s likely that we would end up with no more money to spend without having solved anything.
Going back to improving equity, the three people you mentioned (Rohingya, Yemeni, Afghan) are victims of war and persecution. The root causes of their suffering are political. We could spend hundreds of billions trying to improve their political systems so that this does not happen again, but Afghanistan itself is an example of just how hard that is.
In short, even though helping people now is very valuable, we also don’t know a lot of interventions that scale well with money. Malaria nets and deworming are the exception, not the rule. Remember that the entire world has been trying to eliminate poverty for centuries. It’s just a hard problem.
Maybe paying for vaccines in lower income countries is an effective and scalable intervention. The right way to evaluate this is with a cost-benefit analysis, not by how much money the WHO says it needs.
Turning the United Nations into a Decentralized Autonomous Organization
The UN is now running on ancient technology[source], is extremely centralized[source] and uses outdated voting methods and consensus rules[source]. This results in a slow, inefficient organization, vulnerable to regulatory capture and with messed up incentives.
Fortunately, we now have much better alternatives: Decentralized Autonomous Organizations (DAOs) are blockchain-based organizations which run on smart contracts. They offer many benefits compared to legacy technology:
1. Since the blockchain is always online and permanent, they are always available, fast, and 100% transparent by design.
2. They are decentralized and invulnerable to any attacks:
The blockchain-based DAO system works in a fully decentralized way and is immune to both outside and inside attacks. At the same time, operations of such system is only controlled by pre-defined rules; thus, the uncertainty and errors caused by human processes are greatly reduced.
[source]
3. The rules are enforced by code, so they are unbreakable.
When a government’s powers are encoded on a blockchain, its limitations will not be mere redress in a court of law, but will be the code itself. The inherent capabilities of blockchain technology can ex ante prevent a government from acting ultra vires; it can prevent government over-reach before the government act occurs.
[source]
4. They support new forms of governance and voting, such as futarchy or quadratic voting [source].
5. Since everything runs on Ethereum, and cryptocurrencies always go up, a small investment in Ether now could provide enough funds to run the UN forever, freeing states from having to contribute funds [source].
Given the ample benefits, I’m sure a quick email to UN Secretary General António Guterres will convince everyone to switch to DAOs. Thus, we only need a small team of developers to write the code, which should take maybe a couple of months.
What is the expected impact? The UN recently prohibited nuclear weapons[source], contributing to reducing nuclear risk. An improvement in UN efficiency and capabilities is likely to lead to reduced existential risk, via better global coordination on issues like AI Safety.
Note that the savings from reduced operating costs will be much greater than the implementation cost, so this could even be a profitable intervention.
Hi, thank you for your post, and I’m sorry to hear about your (and others’) bad experience in EA. However, I think that if your experience in EA has mostly been in the Bay Area, you might have an unrepresentative perspective on EA as a whole. Most of the worst incidents of the type you mention that I’ve heard about in EA have taken place in the Bay Area; I’m not sure why.
I’ve mostly been involved in the Western European and Spanish-speaking EA communities, and as far as I know there have been far fewer incidents here. Of course, this might just be because these communities are smaller, or I might simply not have heard of incidents that did take place. Maybe it’s my perspective that’s unrepresentative.
In any case, if you haven’t tried it yet, consider spending more time in other EA communities.