Shower thought: A lot of the talking past each other that happens between vegan and non-vegan[1] EAs[2] might come from selection effects plus typical mind fallacy.[3]
Let’s say there are two types of people: type-A, for whom a vegan diet imposes little or no cost, and type-B, for whom a vegan diet imposes substantial costs (in things like health, productivity,[4] social life). My hunch is that most long-time vegans are type-A, while most type-B people who try going vegan bounce.
Now, to a type-A vegan who doesn’t realize type-B is a thing, those who claim that a vegan diet has costs are obviously lying. And to a type-B person who’s tried and failed to go vegan, vegans are obviously lying about how easy it is. (To put things crudely.)
I think an awareness from both sides of the vegan/non-vegan divide that the other type—B/A—exists, and that the other side is mostly made up of the other type, could go quite a long way toward circumventing frustrating or otherwise unproductive debates.
(I doubt this take is original to me, but I didn’t find it on this forum already with a quick search, so I figured I’d share.)
I expect the same argument to apply to vegetarians versus non-vegetarians, but for brevity I’ll just write “vegan” throughout. (I’m avoiding the veg*n shorthand just because I don’t find *’s aesthetically appealing in prose.)
I’ve specified “EAs” rather than just people in general because my argument works better if most non-vegans have actually tried to go vegan. My sense is that this is true for the EA population, but (very) not true for the general population.
My preferred definition of typical mind fallacy: The mistake of jumping to conclusions—often subconsciously—about other people’s experiences based on your own experiences. In other words, the mistake of assuming that other people are more like you than they really are. (Here’s the LessWrong Wiki’s definition.)
I’ve previously written about my experience here.
This doesn’t super resonate with my experience. I haven’t really seen anyone argue for “veganism is costly for everyone”. I feel like the debate has always been between “for some people veganism is very costly” and “veganism is very cheap for everyone (if they just try properly)”.
Like, it’s not like anyone is arguing that there should be no vegan food at EAG, or that all EAs should be carnivores. Maybe I am missing something here and there are places where people are talking past each other in the way you describe, but e.g. recent conversations with Elizabeth VN and others have been about trying to argue that being vegan is quite costly for some people (in terms of health in that case), not that it’s costly for all people, and many people seemed to disagree with that.
I agree with Will that differences in costs are a major driver of disagreement, but agree with Habryka that it is not at all symmetric in public discussions. In public discussions I’ve only seen type-B people accuse vegans of lying with regard to universal statements, not about what they personally find easy.
I’ll admit that this is less than total. Privately, I expect that some percent of type-As are wrong about how easy veganism is for them, and will develop problems at a later date. If I am talking 1:1 to a vegan experiencing chronic unexplained health issues, and all the obvious stuff has been ruled out, I will suggest nutritional interventions. I don’t see this as relevant to the public debate; there definitely are people for whom veganism is easy, I can’t guess who will turn out to be wrong about their personal difficulty, and establishing common knowledge of the variety is sufficient for public debate.
This seems pretty close to a universal claim: that high cognitive effort is not possible under veganism for anyone (or at least that it’s an open question). It’s not exactly saying that no one finds it easy to be vegan, but rather that the people who do are deluded.
I think that post is better described as a question and personal anecdote, not a universal claim. That’s partially because the author does seem to be genuinely wondering, genuinely wanting data, and genuinely valuing animals; it would be easy for a similar post to look very disingenuous to me.
Meanwhile I count 2 comments dismissing anecdotes and personal experience, even when applied personally.
Yeah, that’s fair. It is about performance, not effort, but does seem closer to a universal claim.
Thanks for your comments, both. I agree that the personal versus universal statements distinction is noteworthy (and missing from my take above).
Probably right, and also applies to “high-donating” vs “low-donating” EAs.
FWIW my personal experience doesn’t square with this. It was initially hard for me but after a transition period where I got accustomed to new foods, it got much easier. For most people—those who are medically able to do it—I think this would be the case.
Hmm, based on what you’ve said here—and I acknowledge that what you’ve said is a highly compressed version of your experience, thus I may well be failing to understand you (and I apologize in advance if I mischaracterize your experience)—I think I’m not quite seeing how this refutes my framing? I accept that my type-A/B framing rounds off a bunch of nuance, but to me, within that framing, it sounds like you’re type-A?
Like, I’m not sure how long the transition period was for you, and I expect different people’s transition periods will vary considerably, but my model, viewed through this lens, is that a type-A person will make it out of their transition period and be able to maintain a vegan diet thereafter at little to no cost. Whereas a type-B person can spend weeks, months—even a year or more, as I did[1]—planning out and iterating on their vegan diet, making sure (through research, blood tests, and so on) that they’re avoiding the known pitfalls, and still never make it out of the transition period.[2][3]
I’ve written about my experience here.
I like this comment from Jason: “Nutritional research is hard, and we’d need a significantly stronger body of research (e.g., random assignment, very large samples) to say that a vegan diet is maximally healthful for everyone at an individual level (as opposed to healthier on the a [sic] population average).” (link)
Moreover, for me, the vegan experience actually got increasingly unpleasant with time, if anything, so I don’t think it’s the case that type-Bs will eventually asymptote to costless veganism if only they stick with it for long enough.
(Additionally, if asymptoting really does occur, but “long enough” means months or years, then I have sympathy for those who give up in the meantime.)
Sorry, I originally commented with a much more detailed account but decided I didn’t want so much personal info on the forum.
On my first attempt at vegetarianism I failed after about a week, and after that I decided to start with avoiding meat at home and at uni. The transition to being fully vegan took about 2.5 years. I was a picky eater so I had a lot of foods and ingredients to get used to. I also improved my cooking abilities a lot during this time.
Edit: it’s true that I’m now in a phase where it is almost costless for me to be vegan, and I’ve been in that state for years. My point is rather that I didn’t start off like that.
Figures on vegetarian/vegan recidivism indicate that a lot of people stop even after years of following that diet. ACE estimates that vegetarians stay vegetarian for about 5 years on average.
The Faunalytics survey indicates quicker dropout: about a third drop out within 3 months, about half drop out within a year, and 84% drop out in total.
Thanks for the data! For other readers I’ll note the Faunalytics page you linked to contains more interesting information (e.g. a majority of lapsed veg*ns try it only for health reasons, while a majority of those who remain veg*n do not).
The remainder of that distribution after the one-year mark would also be interesting, as it might take longer than that to get accustomed to the diet.
This does suggest that a gradual transition might have higher success rates?
I agree with you that the degree of difficulty in going vegan is personal and quite variable. This is one of the reasons I have thought that developing an easy way to offset meat consumption through animal welfare donations could be a very effective program.
Looking back on leaving academia for EA-aligned research, here are two things I’m grateful for:
Being allowed to say why I believe something.
Being allowed to hold contradictory beliefs (i.e., think probabilistically).
In EA research, I can write: ‘Mortimer Snodgrass, last author on the Godchilla paper (Gopher et al., 2021), told me “[x, y, z]”.’
In academia, I had to find a previous paper to cite for any claim I made in my paper, even if I believed the claim because I heard it elsewhere. (Or, rather, I did the aforementioned for my supervisor’s papers—I used to be the research assistant who cherry-picked citations off Google Scholar.)
In EA research, I can write, ‘I estimate that the model was produced in May 2021 (90% confidence interval: March–July 2021)’, or, ‘I’m about 70% confident in this claim’, and even, ‘This paper is more likely than not to contain an important error.’
In academia, I had to argue for a position, without conceding any ground. I had to be all-in on whatever I was claiming; I couldn’t give evidence and considerations for and against. (If I did raise a counterargument, it would be as setup for a counter-counterargument.)
That’s it. No further point to be made. I’m just grateful for my epistemic freedom nowadays.
TAI makes everything else less important.
One of my CERI fellows asked me to elaborate on a claim I made that was along the lines of,* “If AI timelines are shorter, then this makes (direct) nuclear risk work less important because the time during which nuclear weapons can wipe us out is shorter.”
There’s a general point here, I think, which isn’t limited to nuclear risk. Namely, AI timelines being shorter not only makes AI risk more important, but makes everything else less important. Because the time during which the other thing (whether that be an asteroid, engineered pandemic, nuclear war, nanotech-caused grey goo scenario, etc.) matters as a human-triggered x-risk is shortened.
To give the nuclear risk example:^
If TAI is 50 years away, and per-year risk of nuclear conflict is 0.5%, then risk of nuclear conflict before TAI is 1-(0.995^50) = 22%
If TAI is 15 years away, and per-year risk of nuclear conflict is 0.5%, then risk of nuclear conflict before TAI is 1-(0.995^15) = 7%
This does rely on the assumption that we’ll be playing a different ball game after TAI/AGI/HLMI arrives (if not, then there’s no particular reason to view TAI or similar as a cut-off point), but to me this different ball game assumption seems fair (see, e.g., Muehlhauser, 2019).
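For anyone who wants to check the arithmetic, here’s a minimal Python sketch of the calculation above. This is my own illustration, not part of the original shortform; it simply encodes the constant, independent per-year risk assumption flagged in the caveats footnote.

```python
# Minimal sketch (illustrative only): cumulative probability of at least one
# nuclear conflict before TAI, assuming a constant per-year risk that is
# independent across years.

def cumulative_risk(per_year_risk: float, years_until_tai: int) -> float:
    """P(at least one conflict before TAI) = 1 - (1 - p)^n."""
    return 1 - (1 - per_year_risk) ** years_until_tai

for years in (50, 15):
    # Prints roughly 22% for 50 years and 7% for 15 years, matching the bullets above.
    print(f"TAI in {years} years: {cumulative_risk(0.005, years):.0%}")
```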
*My background thinking behind my claim here has been inspired by conversations with Michael Aird, though I’m not certain he’d agree with everything I’ve written in this shortform.
^A couple of not-that-important caveats:
“Before TAI” refers to the default arrival time of TAI if nuclear conflict does not happen.
The simple calculations I’ve performed assume mutual independence between nuclear-risk-in-given-year-x and nuclear-risk-in-given-year-y.
From a skim, I agree with everything in this shortform and think it’s important, except maybe “to me this different ball game assumption seems fair”.
I’d say this “different ball game” assumption seems at least 50% likely to be at least roughly true. But—at least given the current limits of my knowledge and thinking—it doesn’t seem 99% likely to be almost entirely true, and I think the chance it may be somewhat or very untrue should factor into our cause prioritisation & our strategies. (But maybe that’s what you meant by “seems fair”.)
I expand on this in this somewhat longwinded comment. I’ll copy that in a reply here for convenience. (See the link for Ajeya Cotra replying and me replying to that.)
My comment on Ajeya Cotra’s AMA, from Feb 2021 (so probably I’d write it differently today):
“[I’m not sure if you’ve thought about the following sort of question much. Also, I haven’t properly read your report—let me know if this is covered in there.]
I’m interested in a question along the lines of “Do you think some work done before TAI is developed matters in a predictable way—i.e., better than 0 value in expectation—for its effects on the post-TAI world, in ways that don’t just flow through how the work affects the pre-TAI world or how the TAI transition itself plays out? If so, to what extent? And what sort of work?”
An example to illustrate: “Let’s say TAI is developed in 2050, and the ‘TAI transition’ is basically ‘done’ by 2060. Could some work to improve institutional decision-making be useful in terms of how it affects what happens from 2060 onwards, and not just via reducing x-risk (or reducing suffering etc.) before 2060 and improving how the TAI transition goes?”
But I’m not sure it’s obvious what I mean by the above, so here’s my attempt to explain:
The question of when TAI will be developed[1] is clearly very important to a whole bunch of prioritisation questions. One reason is that TAI—and probably the systems leading up to it—will very substantially change many aspects of how society works. Specifically, Open Phil has defined TAI as “AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution” (and Muehlhauser has provided some more detail on what is meant by that).
But I think some EAs implicitly assume something stronger, along the lines of:
The expected moral value of actions we take now is entirely based on those actions’ effects on what happens before TAI is developed and those actions’ effects on the development, deployment, etc. of TAI. That is, the expected value of the actions we take now is not partly based on how the actions affect aspects of the post-TAI world in ways unrelated to how TAI is developed, deployed, etc. This is either because we just can’t at all predict those effects or because those effects wouldn’t be important; the world will just be very shaken up and perhaps unrecognisable, and any effects of pre-TAI actions will be washed out unless they affect how the TAI transition occurs.
E.g., things we do now to improve institutional decision-making or reduce risks of war can matter inasmuch as they reduce risks before TAI and reduce risks from TAI (and maybe also reduce actual harms, increase benefits, etc.). But they’ll have no even-slightly-predictable or substantial effect on decision-making or risks of war in the post-TAI world.
But I don’t think that necessarily follows from how TAI is defined. E.g., various countries, religions, ideologies, political systems, technologies, etc., existed both before the Industrial Revolution and for decades/centuries afterwards. And it seems like some pre-Industrial-Revolution actions—e.g. people who pushed for democracy or the abolition of slavery—had effects on the post-Industrial-Revolution world that were probably predictably positive in advance and that weren’t just about affecting how the Industrial Revolution itself occurred.
(Though it may have still been extremely useful for people taking those actions to know that, when, where, and how the IR would occur, e.g. because then they could push for democracy and abolition in the countries that were about to become much more influential and powerful.)
So I’m tentatively inclined to think that some EAs are assuming that short timelines push against certain types of work more than they really do, and that certain (often “broad”) interventions could be in expectation useful for influencing the post-TAI world in a relatively “continuous” way. In other words, I’m inclined to think there might be less of an extremely abrupt “break” than some people seem to think, even if TAI occurs. (Though it’d still be quite extreme by many standards, just as the Industrial Revolution was.)
[1] Here I’m assuming TAI will be developed, which is questionable, though it seems to me pretty much guaranteed unless some existential catastrophe occurs beforehand.”
‘Five Years After AGI’ Focus Week happening over at Metaculus.
Inspired in part by the EA Forum’s recent debate week, Metaculus is running a “focus week” this week, aimed at trying to make intellectual progress on the issue of “What will the world look like five years after AGI (assuming that humans are not extinct)[1]?”
Leaders of AGI companies, while vocal about some things they anticipate in a post-AGI world (for example, bullishness about AGI making scientific advances), seem deliberately vague about other aspects. For example, power (will AGI companies have a lot of it? all of it?), whether some of the scientific advances might backfire (e.g., a vulnerable world scenario or a race-to-the-bottom digital minds takeoff), and how exactly AGI will be used for “the benefit of all.”
Forecasting questions for the week range from “Percentage living in poverty?” to “Nuclear deterrence undermined?” to “‘Long reflection’ underway?”
Those interested: head over here. You can participate by:
Forecasting
Commenting
Comments are especially valuable on long-term questions, because the forecasting community has less of a track record at these time scales.[2][3]
Writing questions
There may well be some gaps in the admin-created question set.[4] We welcome question contributions from users.
The focus week will likely be followed by an essay contest, since a large part of the value in this initiative, we believe, lies in generating concrete stories for how the future might play out (and for what the inflection points might be). More details to come.[5]
This is not to say that we firmly believe extinction won’t happen. I personally put p(doom) at around 60%. At the same time, however, as I have previously written, I believe that more important trajectory changes lie ahead if humanity does manage to avoid extinction, and that it is worth planning for these things now.
Moreover, I personally take Nuño Sempere’s “Hurdles of using forecasting as a tool for making sense of AI progress” piece seriously, especially the “Excellent forecasters and Superforecasters™ have an imperfect fit for long-term questions” part.
With short-term questions on things like geopolitics, I think one should just basically defer to the Community Prediction. Conversely, with certain long-term questions I believe it’s important to interrogate how forecasters are reasoning about the issue at hand before assigning their predictions too much weight. Forecasters can help themselves by writing comments that explain their reasoning.
In addition, stakeholders we work with, who look at our questions with a view to informing their grantmaking, policymaking, etc., frequently say that they would find more comments valuable in helping bring context to the Community Prediction.
All blame on me, if so.
Update: I ended up leaving Metaculus fairly soon after writing this post. I think that means the essay contest is less likely to happen, but I guess stay tuned in case it does.
One thing the AI Pause Debate Week has made salient to me: there appears to be a mismatch between the kind of slowing that on-the-ground AI policy folks talk about, versus the type that AI policy researchers and technical alignment people talk about.
My impression from talking to policy folks who are in or close to government—admittedly a sample of only five or so—is that the main[1] coordination problem for reducing AI x-risk is about ensuring the so-called alignment tax gets paid (i.e., ensuring that all the big labs put some time/money/effort into safety, and that none “defect” by skimping on safety to jump ahead on capabilities). This seems to rest on the assumption that the alignment tax is a coherent notion and that technical alignment people are somewhat on track to pay this tax.
On the other hand, my impression is that technical alignment people, and AI policy researchers at EA-oriented orgs,[2] are not at all confident in there being a viable level of time/money/effort that will produce safe AGI on the default trajectory. The type of policy action that’s needed, so they seem to say, is much more drastic. For example, something in the vein of global coordination to slow, limit, or outright stop development and deployment of AI capabilities (see, e.g., Larsen’s,[3] Bensinger’s, and Stein-Perlman’s debate week posts), whilst alignment researchers scramble to figure out how on earth to align frontier systems.
I’m concerned by this mismatch. It would appear that the game plans of two adjacent clusters of people working to reduce AI x-risk are at odds. (Clearly, this is an oversimplification and there are a range of takes from within both clusters, but my current epistemic status is that this oversimplification gestures at a true and important pattern.)
Am I simply mistaken about there being a mismatch here? If not, is anyone working to remedy the situation? Or does anyone have thoughts on how this arose, how it could be rectified, or how to prevent similar mismatches from arising in the future?
In the USA, this main is served with a hearty side order of “Let’s make sure China in particular never races ahead on capabilities.”
e.g., Rethink Priorities, AI Impacts
I’m aware that Larsen recently crossed over into writing policy bills, but I’m counting them as a technical person on account of their technical background and their time spent in the Berkeley sphere of technical alignment people. Nonetheless, perhaps crossovers like this are a good omen for policy and technical people getting onto the same page.
I think it’s valuable to note that the people who do well in government are a specific type of person with a specific approach to reality, and they spend many hours of the day in a completely different mindset (buried in a less-nerdy, more toxic environment) than most people in and around EA (buried in a more-nerdy, less toxic environment). A culture of futility is very pervasive in government and possibly important in order to do well at all. People in government roles are pretty far out-of-distribution relative to EA as a whole, and may also have a biased view of government due to higher access to jobs in parts of government with higher turnover and lower patriotism, even if those specific parts aren’t very representative of the parts that matter. Of course, it’s also possible that they got where they are mostly because they’re just that good.
Such a gap would unambiguously be worth analyzing, but probably not in a public forum post imo (especially because it’s probably already being done by people privately).
I’d heart react if this forum introduced reactions.[1]
There have been times in the past (e.g., here) when I’ve wished there were a reaction feature, and I agree with the LessWrong post’s thesis that a reaction feature would positively shape forum culture.
Hi Will, we’re playing with some designs for reactions now. One question we have is whether to introduce reactions at the comment level or the post level. Do you have any gut takes on that?
That’s great news!
Some pros and cons to introducing reactions at the post level:
Pros
It’d be nice to see positive reactions to your post from people you respect.
Heightened sense of community(?)
Cons
This’d probably make the EA Forum look less serious.
Some of the epistemic status reactions (from the LessWrong post) only really make sense at the comment level. For example, “Too Harsh” and “Missed the Point”.
I’m guessing this wouldn’t be too hard to fix, though.
It’d seem inconsistent if reactions appear at the post level, whereas agreement karma only exists at the comment level?
Inconsistent, that is, if one views karma as the “core thing” and agreement karma and reactions as additional features. (It’s not inconsistent if one views reactions as a core thing, alongside karma and above agreement karma.)
Having thought about this for five minutes or so, I think that the EA Forum looking less serious is the most important of the above considerations. Thus, my current take is that I’m in favor of reactions being introduced only at the comment level.
Also, zooming out to the meta level: is there a channel for giving feedback and suggestions on Forum design/features? I have some other hot takes that I’d be happy to share.
Interesting, thanks for your takes. One of the pros that we’ve been most excited about is sharing positive feedback beyond karma back with authors (some combination of your pros). The “serious” culture is super valuable, but also has the effect of scaring people away from posting their ideas, so we’re thinking about what the right balance is.
Anyway, thanks for your takes! We’ll probably post some ideas in the next week for more feedback.
You can give feature suggestions here any time.
Yeah, to clarify why I think some seriousness is important: for a number of people and orgs, this forum is the place they publish their research. Some fraction of this research will be cited outside of the EA Forum, and my guess is that non-EAs may view this research as less credible if there are, for example, smiley face reaccs alongside the title.
Nonetheless, I now think I’m leaning toward post-level reactions. Your point about sharing positive feedback back with authors is salient, in my view, and I also expect that there are viable workarounds to my seriousness objection. For instance, having epistemic status reacts (but not face emoji reacts) at the post level might get the best of both—feedback and seriousness—worlds.
(Of course, I’m just one dude with ~zero UI experience, so feel free to weight my take accordingly.)
Great, thanks!
We could spend all longtermist EA money, now.
(This is a sort-of sequel to my previous shortform.)
The EA funding pool is large, but not infinite. This statement is nothing to write home about, but I’ve noticed quite a few EAs I talk to view longtermist/x-risk EA funding as effectively infinite, the notion being that we’re severely bottlenecked by good funding opportunities.
I think this might be erroneous.
Here are some areas that could plausibly absorb all EA funding, right now:
Biorisk
Better sequencing
Better surveillance
Developing and deploying PPE
Large-scale philanthropic response to a pandemic
AI risk
Policy spending (especially in the US)
AI chips
either scaling up chip production, or buying up top-of-the-range chips
Backing the lab(s) that we might want to get to TAI/AGI/HLMI/PASTA first
(Note: I’m definitely not saying we should fund these things, but I am pointing out that there are large funding opportunities out there which potentially meet the funding bar. For what it’s worth, my true thinking is something closer to: “We should reserve most of our funding for shaping TAI come crunch time, and/or once we have better strategic clarity.”
Note also: Perhaps some, or all, of these don’t actually work, and perhaps there are many more examples I’m missing—I only spent ~3 mins brainstorming the above. I’m also pretty sure this wasn’t a totally original brainstorm, and that I was remembering these examples from something I’d read on a similar topic somewhere, probably here on the Forum, though I can’t recall which post it was.)
Hmm, it feels unclear to me what you’re claiming here. In particular, I’m not sure which of the following is your claim:
“Right now all money committed to EA could be spent on things that we currently (should) think are at least slightly net positive in expectation. (Even if we maybe shouldn’t spend on those things yet, since maybe we should wait for even better opportunities.)”
“Right now all money committed to EA could be spent on things that might be net positive in expectation. (But there aren’t enough identified opportunities that we currently think are net positive to absorb all current EA money. Some of the things currently look net negative but with high uncertainty, and we need to do further research or wait till things naturally become closer and clearer to find out which are net positive. We also need to find more opportunities.)”
1 is a stronger and more interesting claim than 2. But you don’t seem to make it clear which one you’re saying.
If 2 is true, then we still are “severely bottlenecked by good funding opportunities” + by strategic clarity. So it might be that the people you’re talking to are already thinking 2, rather than that EA funding is effectively infinite?
To be clear, I do think 2 is importantly different from “we have effectively infinite money”, in particular in that it pushes in favor of not spending on extremely slightly net positive funding opportunities now since we want to save money for when we’ve learned more about which of the known maybe-good huge funding opportunities are good.* So if there are people acting and thinking as though we have effectively infinite money, I do think they should get ~this message. But I think your shortform could maybe benefit from distinguishing 1 and 2.
(Also, a nit-picky point: I’d suggest avoiding phrasing like “could plausibly absorb all EA funding” without a word like “productively”, since of course there are things that can literally just absorb our funding—literally just spending is easy.)
*E.g., personally I think trying to spend >$1b in 2023 on each of the AI things you mentioned would probably require spending on some things that are net negative in expectation, but I also think that we should keep those ideas in mind for the future and spend a bit more slowly on other things for that reason.
Perhaps thinking of this post?
Maybe it was some combination of the posts with the megaprojects tag?
I just came across this old comment by Wei Dai which has aged well, for unfortunate reasons.
I think a healthy dose of moral uncertainty (and normative uncertainty in general) is really important to have, because it seems pretty easy for any ethical/social movement to become fanatical or to incur a radical element, and end up doing damage to itself, its members, or society at large. (“The road to hell is paved with good intentions” and all that.)
I think there’s something off about the view that we need to be uncertain about morality to not become fanatic maniacs who are a danger to other people. It’s perfectly possible to have firm/confident moral views that are respectful of other people having different life goals from one’s own. Just don’t be a moral realist utilitarian. The problem is moral realism + utilitarianism, not having confident takes on your morality.
Another way to say this is that it seems dangerously fragile if the only reason one doesn’t become a maniac is moral uncertainty. What if you feel like you’re becoming increasingly confident about some moral view? It tends to happen to people.
Strong agree—there are so many ways to go off the rails even if you’re prioritizing being super humble and weak.[1]
“weak” i.e. in the usage “strong views weakly held”