Correct me if I’m wrong, but it seems to me that the proposals were eventually weakened to the point that conservation of resources became the primary (perhaps sole?) focus.
“Primary focus” seems correct. The resulting legal texts didn’t mention plant-based food anymore, if I recall correctly, but still led to a reduction in meat consumption/portions, so in that sense they were still somewhat successful.
As I already mentioned via email, I think this is an excellent post.
I just noticed that I overlooked one point when giving feedback: The main idea behind Sentience Politics’ “sustainable nutrition” initiatives was also to promote animal welfare and expand the moral circle (through reducing meat consumption). The environmental benefits are also significant, but weren’t the primary motivation.
Thanks! I think I don’t have the capacity to give detailed public replies to this right now. My respective short answers would be something like “sure, that seems fine” and “might inspire riskier content, depends a lot on the framing and context”, but there’s nuance to this that’s hard to convey in half a sentence. If you would like to write something about these topics and are interested in my perspective, feel free to get in touch and I’m happy to share my thoughts!
Do the kinds of s-risks EAF has in mind mostly involve artificial sentience to get to astronomical scale?
Yes, see here. Though we also put some credence on other “unknown unknowns” that we might prevent through broad interventions (like promoting compassion and cooperation).
Are you primarily concerned with the creation of autonomous, self-sustaining (self-replicating) suffering processes? Or are you also very concerned about an agent that already has, or creates, individuals capable of suffering who require resources from the agent to keep running, and whom the agent keeps running despite those costs (of running, or the extra costs of sentience specifically)?
My guess is that the latter is much more limited in potential scale.
Both could be concerning. I find it hard to think about future technological capabilities and agents in sufficient detail. So rather than thinking about specific scenarios, we’d like to reduce s-risks through (hopefully) more robust levers such as making the future less multipolar and differentially researching peaceful bargaining mechanisms.
Thanks for giving input on this!
So you seem to think that our guidelines ask people to weaken their views while Nick’s may not be doing that, and that they may be harmful to suffering-focused views if we think promoting SFE is important. I think my perspective differs in the following ways:
The guidelines are fairly similar in their recommendation to mention moral uncertainty and arguments that are especially important to other parts of the community while representing one’s own views honestly.
If we want to promote SFE in EA, we will be more convincing for (potential) EAs if we provide nuanced and balanced arguments, which is what the guidelines ask for, and if s-risks research is more fleshed out and established in the community. Unlike our previous SFE content, our recent efforts (e.g., workshops, asking for feedback on early drafts) received a lot of engagement from both newer and long-time EA community members. (Outside of EA, this seems less clear.)
We sought feedback on these guidelines from community members and received largely positive feedback. Some people will always disagree but overall, most people were in favor. We’ll seek out feedback again when we revisit the guidelines.
I think this new form of cooperation across the community is worth trying and improving on. It may not be perfect yet, but we will reassess at the end of this year and make adjustments (or discontinue the guidelines in a worst case).
I hope this is helpful. We have now published the guidelines, you can find the links above!
Fully agreed, thanks for the clarification!
They already have a committee allocating the grants which includes some academics, and they said they want to further improve the award practice. We have suggested specific academics they could work with. I’m not sure what it will end up looking like in practice. There are certainly some people in the administration who are eager to preserve the status quo, whereas others seemed quite excited about effectiveness improvements.
I don’t think it’s possible for citizens to sue the government for failing to implement a ballot initiative (or at least that’s very uncommon). But there are many indirect ways to enforce an initiative, e.g., we could talk to the members of the city council who we know and work with them to submit motions to improve the implementation of the initiative. In general, referenda are taken very seriously in Switzerland.
As I wrote above, the bottleneck is likely EA-aligned people with development knowledge wanting to spend a couple of hours per year on this (rather than formal ways of suing/filing complaints if it’s not implemented in the way we’d like). I think even a few small, friendly nudges would go a long way.
so much so that it could flip the sign of your assessment
That sounds like you think it might have been net negative, but I don’t see how that follows from your points. Unless you think the entire budget has literally zero impact, which I think is very unlikely for the following reason:
I think it’s likely to have a significant positive impact if citizens of a city with a nominal per-capita GDP of $180,000 (source) give more money to people in developing countries (with a per-capita GDP which is ~2 orders of magnitude lower), even if that happens inefficiently. (There’s a lot of EA and non-EA writing on the indirect effects of foreign aid, etc. so I’m not going to elaborate more on that here.)
Right, a non-consequentialist analysis might lead to different conclusions in this case. Thanks for pointing that out!
I think there’s still a pretty strong case that development cooperation isn’t quite as straightforward, because developed countries have harmed developing countries in many ways (colonialism, tax havens, agricultural export subsidies, etc.). Thomas Pogge has argued along these lines, IIRC, so one could look at his views on this.
More generally, we live in a highly globalized world, we routinely interact with these countries through trade, etc., such that it seems plausible that we do have some responsibilities towards them. And we’re talking about one of the very wealthiest cities in the world (with a per-capita GDP of $180,000!) giving a relatively small additional amount. So if there is one particular case where Huemer’s arguments appear particularly implausible, it is probably this one.
Overall, I don’t think it’s obvious whether the case for development cooperation becomes weaker or stronger if we take into account various non-consequentialist perspectives.
Thanks for the input!
Because modeling this involves several judgment calls and would make the analysis much more complex (and harder to understand), we decided it’s better not to include it in the quantitative model and instead just mention it in the text.
I also think this is unlikely to change the numbers by more than 10%. Changing that would take several fairly strong assumptions, e.g., that Zurich’s marginal budget is spent effectively in an important cause area such as global catastrophic risk research funding.
Some brainstorming ideas for how to model this cost:
You could model a tax increase as a reduction in income for Zurich residents (data on per-capita GDP in the city of Zurich is available) and compare that to an increase in income for the average development cooperation recipient (taking into account that some funding is used to compensate Swiss development cooperation staff). The line of reasoning from this article (also linked above) could help translate this into welfare changes.
You could try to better understand spending cuts by looking at the budget items and which ones tended to be cut during past cuts, then try to estimate how they compare to development cooperation (or the things Zurich residents usually spend money on).
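The first brainstorming idea above could be sketched roughly as follows, assuming log utility of income. The $180,000 per-capita GDP figure comes from this discussion; the recipient income, transfer size, and overhead share are placeholder assumptions, not real data:

```python
import math

# Placeholder inputs: only the Zurich per-capita GDP figure is from the
# discussion above; the rest are illustrative assumptions.
zurich_income = 180_000       # per-capita GDP, city of Zurich (USD)
recipient_income = 1_800      # assumed per-capita income of aid recipients (~2 OOM lower)
transfer_per_resident = 20    # assumed extra tax per Zurich resident (USD)
overhead = 0.25               # assumed share spent on staff/administration

# With log utility, the welfare effect of an income change is
# ln(new_income / old_income), so marginal welfare scales with 1/income.
donor_loss = math.log((zurich_income - transfer_per_resident) / zurich_income)
recipient_gain = math.log(
    (recipient_income + transfer_per_resident * (1 - overhead)) / recipient_income
)

print(f"Donor welfare change:     {donor_loss:+.5f}")
print(f"Recipient welfare change: {recipient_gain:+.5f}")
print(f"Gain-to-loss ratio: {abs(recipient_gain / donor_loss):.0f}x")
```

Even with a quarter of the transfer lost to overhead, the large income gap makes the recipient’s welfare gain dwarf the donor’s loss under these assumptions; the real analysis would of course need defensible inputs for each parameter.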
To further clarify: I think in many circumstances (e.g., for a ballot initiative in Switzerland on the federal level), public opinion polling would be crucial. But for this specific type of city-level initiative, I don’t think it would help much.
That’s correct. The original proposals for sustainable nutrition explicitly mentioned “plant-based” and “animal-friendly” food, but then the counterproposals only said “sustainable” or “environmentally friendly.” So I’d say overall, from an animal welfare perspective, they were moderately successful. We didn’t have the time to evaluate their actual impact, though I think this would be a worthwhile project for EAs, especially if it results in an EA Forum article similar to this one.
I agree with the importance of “choosing the right avenue.” I still don’t think public opinion polling is very useful for that purpose (especially if some polling data is already available). In fact, I think public opinion polling would have been unlikely to clearly identify the key issues because the general public has much less pronounced and well-informed opinions than politicians and other stakeholders.
At least for Swiss initiatives, getting reactions/opinions from the responsible legislative body and the people they trust (like local charities in this case) seems much more useful because it shapes the legislative bodies’ official recommendation to voters. I think it was a mistake not to do more of that type of stakeholder engagement in the early stages of the initiative, and that mistake almost led to a complete failure of the initiative.
Also noteworthy: talking to local politicians is still much cheaper than public opinion polling (it costs a couple of hours rather than thousands of dollars, plus a lot of work to get the polling right).
That said, I think doing some polling before launching an initiative could also be somewhat helpful.
New EA Forum post is out: EAF’s ballot initiative doubled Zurich’s development aid.
EAF also launched the following ballot initiatives (through its now spun-off project Sentience Politics):
Ban on factory farming (see also this), federal initiative in Switzerland, signatures collected, vote expected in ~2023
Basic rights for primates (see also this), Canton of Basel-City, signatures collected, initiative first deemed invalid and then declared valid on appeal; I’m not sure of the exact current status, but I think the vote is expected in ~2021
Sustainable nutrition, Lucerne, counterproposal passed (60% in favor) at the ballot in September 2018
Sustainable nutrition, Basel, rejected at the ballot (67% against) on 4 March 2018
Sustainable nutrition, Zurich, counterproposal passed (60% in favor) at the ballot in November 2017
Sustainable nutrition, Berlin Kreuzberg/Friedrichshain, implemented in a weakened form by the city without a vote in ~2018
Edit: Note that the main idea behind “sustainable nutrition” initiatives was to reduce meat consumption and promote veganism and animal welfare. There are also significant environmental benefits, but those weren’t the main reason for launching the initiatives.
Stock market returns are larger than the economic growth rate, so it could still work? In fact, that could even speak in favor of investing?
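A toy compounding calculation illustrates the point; the 7% return and 3% growth figures are assumed placeholders, not claims about actual rates:

```python
# Toy illustration (assumed rates): if investments compound faster than
# the economy grows, invested funds buy an increasing share of output.
market_return = 0.07   # assumed annual stock market return
growth_rate = 0.03     # assumed annual economic growth rate
years = 30

investment = (1 + market_return) ** years
economy = (1 + growth_rate) ** years
print(f"Investment grows {investment:.1f}x, economy {economy:.1f}x "
      f"-> relative purchasing power {investment / economy:.1f}x")
```

Under these assumed rates, the invested funds roughly triple relative to the size of the economy over 30 years, which is the sense in which returns exceeding growth could favor investing.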
Some quick thoughts (most are probably obvious):
Some reasons to think we shouldn’t invest more than we currently are:
The highest-return investment opportunities may be non-financial, such as 80K, CEA, Founders Pledge, etc. Spending on the stability of the EA community can also be seen as a form of investment. This also means we might naïvely underestimate the EA community’s current investment rate.
Most of the EA billionaires’ funding is currently being invested.
A lot of EAs are currently early in their careers and thus “investing” in their careers, with the largest payoffs to occur many years in the future.
It could be worth setting this up partly as “Open Phil insurance,” i.e., this fund could fund EA organizations and Open Phil’s most effective longtermist grants in the event that Open Phil funding dries up (e.g., Good Ventures stops collaborating with Open Phil for some reason).
To attract more funding, it could be worth setting this up so that donors have the option of retaining some discretion. E.g., donors may not fully agree with the fund managers’ worldview, which is why there are currently a number of longtermist giving opportunities (specific organizations, the donor lottery, the Long-Term Future Fund, the Survival and Flourishing Fund, the EAF Fund (which is focused on s-risks)). A low-effort way of implementing this, at least partly, would be to let donors “label” their donation for a particular worldview; the fund managers would then try to take this into account informally in their grantmaking by talking to the experts holding that worldview at the time.
Relatedly, if we look at the current EA donor landscape, it seems that most expected funding for this fund will come from a single billionaire. It’s probably worth working with them directly and custom-tailoring the fund to them.
Open Phil’s committee mechanism might be helpful for the governance of your fund.
Thank you for the feedback!
Yes, we sent out both guidelines simultaneously, and they link to each other. The post you’re referring to mentioned Nick’s guidelines in passing, but it seems readers got an incomplete or incorrect impression.
You mention beliefs, too; does this include suffering-focused views generally?
The guidelines talk about beliefs that are important to us in general. Suffering-focused views aren’t mentioned as a concrete example, but flawed futures and s-risks are.