Yeah, I would think that we would want ASI-entities to (a) have positively valenced experiences, as well as the goal of advancing their own positively valenced experiences (and minimizing their own negatively valenced experiences), and/or (b) have the goal of advancing the positively valenced experiences of other beings and minimizing their negatively valenced experiences.
A lot of the discussion I hear around the importance of "getting alignment right" pertains to lock-in effects regarding suboptimal futures.
Given the probable irreversibility of the fate accompanying ASI and the potential magnitude of good and bad consequences across space and time, trying to maximize the chances of positive outcomes seems simply prudent. Perhaps some of the "messaging" of AI safety is a bit human-centered, because that framing is more accessible to more people. But most who have seriously considered a post-ASI world have considered the possibility of digital minds both as moral patients (capable of valenced experience) and as stewards of value and disvalue in the universe.
I agree EAs often discuss the importance of "getting alignment right" and then subtly frame this in terms of ensuring that AIs either care about consciousness or possess consciousness themselves. However, the most common explicit justification for delaying AI development is the argument that doing so increases the likelihood that AIs will be aligned with human interests. This distinction is crucial because aligning AI with human interests is not the same as ensuring that AI maximizes utilitarian value: human interests and utilitarian value are not equivalent.
Currently, we lack strong empirical evidence to determine whether AIs will ultimately generate more or less value than humans from a utilitarian point of view. Because we do not yet know which is the case, there is no clear justification for defaulting to delaying AI development rather than accelerating it. If AIs turn out to generate more moral value than humans, then delaying AI would mean we are actively making a mistake: we would be increasing the probability of future human dominance, since by assumption, the main effect of delaying AI is to increase the probability that AIs will be aligned with human interests. This would risk entrenching a suboptimal future.
On the other hand, if AIs end up generating less value, as many effective altruists currently believe, then delaying AI would indeed be the right decision. However, since we do not yet have enough evidence to determine which scenario is correct, we should recognize this uncertainty rather than assume that delaying AI is the obviously preferable, or default, course of action.
Because we face substantial uncertainty around the eventual moral value of AIs, any small reduction in p(doom) or catastrophic outcomes (including S-risks) carries enormous expected utility. Even if delaying AI costs us a few extra years before reaping its benefits (whether enjoyed by humans, other organic species, or digital minds), that near-term loss pales in comparison to the potentially astronomical impact of preventing (or mitigating) disastrous futures or enabling far higher-value ones.
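The expected-value arithmetic behind this claim can be sketched as a toy model. Every number below is deliberately made up purely for illustration (none is an actual estimate of delay costs, future value, or risk reduction); the point is only that when the long-term stakes are astronomically larger than the near-term cost, even a tiny risk reduction dominates.

```python
# Toy expected-value sketch with entirely hypothetical numbers.
delay_cost = 1e10        # hypothetical utility lost to a few years' delay
future_value = 1e30      # hypothetical utility of a good long-term future
p_doom_reduction = 1e-6  # hypothetical tiny reduction in catastrophe risk

# Expected gain from the delay: tiny probability times astronomical stakes.
ev_gain_from_delay = p_doom_reduction * future_value  # = 1e24

# Even a one-in-a-million risk reduction swamps the near-term cost here.
assert ev_gain_from_delay > delay_cost
```

Of course, this only shows that the conclusion follows *if* the stakes really are that lopsided and the risk reduction really is achieved, which is exactly what the reply below contests.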
From a purely utilitarian viewpoint, the harm of a short delay is utterly dominated by the scale of possible misalignment risks and missed opportunities for ensuring the best long-term trajectory, whether for humans, other organic species, or digital minds. Consequently, it's prudent to err on the side of delay if doing so meaningfully improves our chance of securing a safe and maximally valuable future. This would be true regardless of the substrate of consciousness.
Your argument appears to assume that, in the absence of evidence about what goals future AI systems will have, delaying AI development should be the default position to mitigate risk. But why should we accept this assumption? Why not consider acceleration just as reasonable a default? If we lack meaningful evidence about the values AI will develop, then we have no more justification for assuming that delay is preferable than we do for assuming that acceleration is.
In fact, one could just as easily argue the opposite: that AI might develop moral values superior to those of humans. This claim appears to have about as much empirical support as the assumption that AI values will be worse. This argument could then justify accelerating AI rather than delaying it. Using the same logic that you just applied, one could make a symmetrical counterargument against your position: that accelerating AI is actually the correct course of action, since any minor harms caused by moving forward are vastly outweighed by the long-term risk of locking in suboptimal values through unnecessary delay. Delaying AI development would, in this context, risk entrenching human values, which are inferior to the default AI values we would get through acceleration.
You might think that even weak evidence in favor of delaying AI is sufficient to support this strategy as the default course of action. But this would seem to assume a "knife's edge" scenario, where even a slight epistemic advantage (such as a 51% chance that delay is beneficial versus a 49% chance that acceleration is beneficial) should be enough to justify committing to a pause. If we adopted this kind of reasoning in other domains, we would quickly fall into epistemic paralysis, constantly shifting strategies based on fragile, easily reversible analysis.
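The fragility of the knife's-edge case can be made concrete with a toy calculation, again using entirely hypothetical numbers: when credence sits near 50/50 and the stakes are symmetric, a two-point shift in credence flips the sign of the expected value of pausing.

```python
# Toy symmetric-stakes model; all numbers are hypothetical.
stakes = 1e30  # hypothetical value at stake, the same in either direction

def ev_of_pause(p_delay_good):
    # +stakes if delay turns out to be the right call,
    # -stakes if acceleration was actually the right call.
    return p_delay_good * stakes - (1 - p_delay_good) * stakes

# A 51% credence makes pausing look positive; 49% makes it negative.
assert ev_of_pause(0.51) > 0
assert ev_of_pause(0.49) < 0
```

This is the sense in which a policy pinned to a 51/49 estimate is fragile: a small, easily reversible update in the evidence reverses the recommended strategy entirely.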
Given this high level of uncertainty about AI's future trajectory, I think the best approach is to focus on the most immediate and concrete tradeoffs that we can analyze with some degree of confidence. This includes whether delaying or accelerating AI is likely to be more beneficial to the current generation of humans. However, based on the available evidence, I believe that accelerating AI, rather than delaying it, is likely the better choice, as I highlight in my post.