Animal welfare does more to push the frontiers of moral circle expansion
Harris was the one personally behind the voluntary AI safety commitments of July 2023. Here’s a press release from the White House:
The Vice President’s trip to the United Kingdom builds on her long record of leadership to confront the challenges and seize the opportunities of advanced technology. In May, she convened the CEOs of companies at the forefront of AI innovation, resulting in voluntary commitments from 15 leading AI companies to help move toward safe, secure, and transparent development of AI technology. In July, the Vice President convened consumer protection, labor, and civil rights leaders to discuss the risks related to AI and to underscore that it is a false choice to suggest America can either advance innovation or protect consumers’ rights.
As part of her visit to the United Kingdom, the Vice President is announcing the following initiatives. The United States AI Safety Institute: The Biden-Harris Administration, through the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) inside NIST. …
See also Foreign Policy’s piece “Kamala Harris’s Record as the Biden Administration’s AI Czar.”
I’m surprised the video doesn’t mention cooperative AI and avoiding conflict among transformative AI systems, as this is (apparently) a priority of the Center on Long-Term Risk, one of the main s-risk organizations. See Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda for more details.
I wouldn’t consider factory farming to be an instance of astronomical suffering, as bad as the practice is, since I don’t think the suffering from one century of factory farming exceeds hundreds of millions of years of wild animal suffering. However, perhaps it could be an s-risk if factory farming somehow continues for a billion years. For reference, here is the definition of s-risk from a talk by CLR in 2017:
“S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.”
Great to see this! One quick piece of feedback: It takes a while to see a response from the chatbot. Are you planning on streaming text responses in the future?
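In case it’s useful, here is a minimal sketch of what server-side token streaming could look like, assuming a Python/Flask backend and Server-Sent Events; I don’t know your actual stack, so treat the names and endpoint as placeholders:

```python
# Minimal sketch: stream a chatbot reply chunk-by-chunk via Server-Sent Events.
# generate_reply() is a stand-in for the real model call.
from flask import Flask, Response
import time

app = Flask(__name__)

def generate_reply():
    # Placeholder generator that yields tokens incrementally.
    for token in ["Hello", " there!", " How", " can", " I", " help?"]:
        time.sleep(0.1)  # simulate per-token generation latency
        yield token

@app.route("/chat")
def chat():
    def event_stream():
        for token in generate_reply():
            yield f"data: {token}\n\n"  # SSE chunk format
    return Response(event_stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run()
```

On the client side, the browser’s EventSource API (or a fetch call that reads the response stream) can append each chunk to the chat window as it arrives, so users see text immediately rather than after the whole reply is generated.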
Thanks for your comment, Jackson! I’ve removed my post since it seems that it was too confusing. One message that I meant to convey is that the imaginary nuclear company essentially does not have any safety commitments currently in effect (“we aren’t sure yet how to operate our plant safely”) and is willing to accept any number of deaths under 10,000, despite adopting this “responsible nuclear policy.”
I think another promising intervention would be to persuade God to be a conditional annihilationist or support universal reconciliation with Christ. Abraham successfully negotiated conditions with God regarding the destruction of Sodom and Gomorrah with just a few sentences. Imagine what we could do with rigorous and prayerful BOTEC analyses! Even if there is a small chance of this succeeding, the impact could be incredible in expectation.
Are there any readings about how a long reflection could be realistically and concretely achieved?
Great post! I’ve written a paper along similar lines for the SERI Conference in April 2023 here, titled “AI Alignment Is Not Enough to Make the Future Go Well.” Here is the abstract:
AI alignment is commonly explained as aligning advanced AI systems with human values. Especially when combined with the idea that AI systems aim to optimize their world based on their goals, this has led to the belief that solving the problem of AI alignment will pave the way for an excellent future. However, this common definition of AI alignment is somewhat idealistic and misleading, as the majority of alignment research for cutting-edge systems is focused on aligning AI with task preferences (training AIs to solve user-provided tasks in a helpful manner), as well as reducing the risk that the AI would have the goal of causing catastrophe.
We can conceptualize three different targets of alignment: alignment to task preferences, human values, or idealized values.
Extrapolating from the deployment of advanced systems such as GPT-4 and from studying economic incentives, we can expect AIs aligned with task preferences to be the dominant form of aligned AIs by default.
Aligning AI to task preferences will not by itself solve major problems for the long-term future. Among other problems, these include moral progress, existential security, wild animal suffering, the well-being of digital minds, risks of catastrophic conflict, and optimizing for ideal values. Additional efforts are necessary to motivate society to have the capacity and will to solve these problems.
I don’t necessarily think of humans as maximizing economic consumption, but I argue that power-seeking entities (e.g., some corporations or hegemonic governments using AIs) will have predominant influence, and these will not have altruistic goals to optimize for impartial value, by default.
Congrats on launching GWWC Local Groups! Community building infrastructure can be hard to set up, so I appreciate the work here.
It would be bad to create significant public pressure for a pause through advocacy, because this would cause relevant actors (particularly AGI labs) to spend their effort on looking good to the public, rather than doing what is actually good.
I think I can reasonably model the safety teams at AGI labs as genuinely trying to do good. But I don’t know that the AGI labs as organizations are best modeled as trying to do good, rather than optimizing for objectives like outperforming competitors, attracting investment, and advancing exciting capabilities – subject to some safety-related concerns from leadership. That said, public pressure could manifest itself in a variety of ways, some of which might work toward more or less productive goals.
I agree that conditional pauses are better than unconditional pauses, due to pragmatic factors. But I worry about AGI labs specification-gaming their way through dangerous-capability evaluations, using brittle band-aid fixes that don’t meaningfully contribute to safety.
I think GiveWell shouldn’t be modeled as wanting to recommend organizations that save as many current lives as possible. I think a more accurate way to model them is “GiveWell recommends organizations that are [within the Overton Window]/[have very sound data to back impact estimates] that save as many current lives as possible.”
This is correct if you look at GiveWell’s criteria for evaluating donation opportunities. GiveWell’s highly publicized claim “We search for the charities that save or improve lives the most per dollar” is somewhat misleading given that they only consider organizations with RCT-style evidence backing their effectiveness.
Upvoted. This is what longtermism is already doing (relying heavily on non-quantitative, non-objective evidence) and the approach can make sense for more standard local causes as well.
What do you think are the main reasons behind wanting to deploy your own model instead of using an API? Some reasons I can think of (see the sketch after this list):
cost savings
data privacy, not wanting usage to be tracked
interpretability research (this is EleutherAI’s justification for releasing an open-source large language model)
wanting to do things that are prohibited by the API
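To make the tradeoffs concrete, here is a rough sketch of the two setups. The hosted endpoint and key below are placeholders rather than any specific provider’s real API; the local path uses the Hugging Face transformers pipeline, which is how open-source models like EleutherAI’s are typically loaded:

```python
import os
import requests

# Option 1: call a hosted API -- no infrastructure to manage, but usage is
# metered and prompts pass through the provider's servers.
def generate_via_api(prompt: str) -> str:
    resp = requests.post(
        "https://api.example-llm-provider.com/v1/generate",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        json={"prompt": prompt, "max_tokens": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

# Option 2: run an open-source model locally -- upfront compute cost, but no
# per-token fees, no external data sharing, full access to the weights
# (e.g., for interpretability work), and no API usage restrictions.
def generate_locally(prompt: str) -> str:
    from transformers import pipeline  # pip install transformers torch
    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
    return generator(prompt, max_new_tokens=100)[0]["generated_text"]
```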
For anyone interested, the Center for AI Safety is offering up to $500,000 in prizes for benchmark ideas: SafeBench (mlsafety.org)
Where do you draw the line between AI startups that do vs. don’t contribute excessively to capabilities externalities and existential risk? I think you’re right that your particular startup wouldn’t significantly accelerate timelines. But if we’re thinking about AI startups in general, this could be another OpenAI or Adept, which probably have more of an effect on timelines.
I could imagine that even if one’s startup isn’t working on scaling and making models generally smarter, a relatively small amount of applications work to make them more useful could put them at notably greater risk of having dangerous capabilities or intent. As an example, imagine if OpenAI had only made GPT-3 and never produced InstructGPT or ChatGPT. It feels a lot harder to steer GPT-3 toward doing useful things, so I think there would have been noticeably less adoption of LLMs and less interest in advancing their capabilities, at least for a while. (To clarify, my claim isn’t that InstructGPT and ChatGPT necessarily contributed to existential risk, but they do have capabilities externalities and I think they affected timelines, in part due to the hype they generated.)
Related: Risks of space colonization (Kovic, 2020).
Just so I understand, are all four of these quotes arguing against preference utilitarianism?
I’m curious whether the reason why EA may be perceived as a cult while, e.g., environmentalist and social justice activism are not, is primarily that the concerns of EA are much less mainstream.
I appreciate the suggestions on how to make EA less cultish, and I think they are valuable to implement, but I don’t think they would have a significant effect on public perception of whether EA is a cult.
The plant-based foods industry should make low-phytoestrogen soy products.
Soy is an excellent plant-based protein. It’s also a source of phytoestrogens called isoflavones, which men online are concerned have feminizing properties (cf. soy boy). I think the effect of isoflavones is low for moderate consumption (e.g., one 3.5 oz block of tofu per day), but it could be significant if the average American were to replace the majority of their meat consumption with soy-based products.
Fortunately, isoflavones in soy don’t have to be an issue. Low-isoflavone products are around, but they’re not labeled as such. I think it would be a major win for animal welfare if the plant-based foods industry could transition soy-based products to low-isoflavone and execute a successful marketing campaign to quell concerns about phytoestrogens (without denigrating higher-isoflavone soy products).
More speculatively, soy growers could breed or bioengineer soy to be low in isoflavones, like other legumes. One model for this is the lupin bean: ordinary lupin beans contain bitter, toxic alkaloids and need days of soaking, but Australian sweet lupins, bred in the 1960s, have dramatically lower alkaloid content and are essentially ready to eat.
Isoflavone content varies dramatically depending on processing and growing conditions. This chart from Examine shows that 100 g of tofu can have anywhere from 3 to 142 mg of isoflavones, and 100 g of soy protein isolate can have 46 to 200 mg of isoflavones.
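To make that spread concrete, here is a back-of-the-envelope calculation using the Examine ranges above; the serving sizes are my own illustrative assumptions, not figures from the chart:

```python
# Back-of-the-envelope isoflavone intake, using the per-100 g ranges quoted
# above from Examine. Serving sizes are illustrative assumptions.
TOFU_MG_PER_100G = (3, 142)        # isoflavones in 100 g of tofu (low, high)
ISOLATE_MG_PER_100G = (46, 200)    # isoflavones in 100 g of soy protein isolate

def daily_intake_mg(grams: float, per_100g: tuple) -> tuple:
    low, high = per_100g
    return (grams / 100 * low, grams / 100 * high)

# Moderate consumption: one 3.5 oz (~100 g) block of tofu per day.
print(daily_intake_mg(100, TOFU_MG_PER_100G))   # (3.0, 142.0) mg/day
# Hypothetical heavy meat-replacement diet: 300 g of tofu per day.
print(daily_intake_mg(300, TOFU_MG_PER_100G))   # (9.0, 426.0) mg/day
```

The width of that range is the point: at the low end of the chart, even a heavy soy diet stays under ~10 mg of isoflavones per day, while at the high end a single daily block of tofu already exceeds 100 mg.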