Some criticism of the EA Virtual Programs introductory fellowship syllabus:
I was recently looking through the EA Virtual Programs introductory fellowship syllabus. I was disappointed to see zero mention of s-risks or the possible relevance of animal advocacy to longtermism in the sections on longtermism and existential risk.
I understand that mainstream EA is largely classical utilitarian in practice (even if it recognizes moral uncertainty in principle), but it seems irresponsible not to expose people to these ideas even by the lights of classical utilitarianism.
What explains this omission? A few possibilities:
1. The people who created the fellowship syllabus aren’t very familiar with s-risks or the possible relevance of animal advocacy to longtermism.

   This seems plausible to me. I think founder effects heavily influence EA, and the big figures in mainstream EA don’t seem to discuss these ideas very much.

2. These topics seem too weird for an introductory fellowship.

   It’s true that a lot of s-risk scenarios are weird. But there is always some trade-off to be made between mainstream palatability and potential for impact. The inclusion of x-risks shows that we are willing to make this trade-off when the ideas discussed are important. To justify excluding s-risks, their weirdness-to-impact ratio would have to be much higher than that of x-risks. This might be true of particular s-risk scenarios, but even so, a general discussion of future suffering need not reference the weirder scenarios. It could also make sense to include s-risks as optional reading (so as to avoid turning off people who are less open-minded).

   The possible relevance of animal advocacy to longtermism does not strike me as any weirder than the discussion of factory farming, and omitting this material makes longtermism seem very anthropocentric. (I think we could also improve on this by referring to the long-term future with terms like “The Future of Life” rather than “The Future of Humanity.”)
More generally, I think that the EA community could do a much better job of communicating the core premise of longtermism without committing itself too strongly to particular ethical views (e.g., classical utilitarianism) or empirical views (e.g., that animals won’t exist in large numbers in the future and are thus irrelevant to longtermism). I see many of my peers simply defer to the values supported by organizations like 80,000 Hours without reflecting much on their own positions, which strikes me as quite problematic. The failure to include a broader range of ideas and topics in introductory fellowships only exacerbates this problem of groupthink.
[Note: it’s quite possible that the syllabus is not completely finished at this point, so perhaps these issues will be addressed. But I think these complaints apply more generally, so I felt like posting this.]
This seems fixable by sending an email to whoever is organizing the syllabus, possibly after writing a small syllabus on s-risks yourself or finding one already written.
Yeah I have been in touch with them. Thanks!
Hi all, I’m sorry if this isn’t the right place to post. Please redirect me if there’s somewhere else this should go.
I’m posting on behalf of my friend, an aspiring AI researcher in his early 20s who is looking to live with like-minded individuals. He currently lives in Southern California but is open to relocating (preferably within the USA, especially California).
Please message jeffreypythonclass+ea@gmail.com if you’re interested!
You might try the East Bay EA/Rationality Housing Board
Can you be a bit more specific than “aspiring AI researcher”? E.g., are they interested in AI safety, in AI research for other EA reasons, in the money, in AI as a scientific question, etc.?
Local vs. global optimization in career choice
Like many young people in the EA community, I often find myself paralyzed by career planning and am quick to second-guess my current path, developing an unhealthy obsession with keeping doors open in case I realize that I really should have done this other thing.
Many posts have been written recently about the pitfalls of planning your career as if you were some generic template to be molded by 80,000 Hours [reference Holden’s aptitudes post, etc.]. I’m still trying to process these ideas and think that the distinction between local and global optimization may help me (and hopefully others) with career planning.
Global optimization involves finding the best among all possible solutions. By its nature, EA is focused on global optimization: identifying the world’s most pressing problems and what we can do to solve them. This approach works well at the community level: we can simultaneously explore and exploit, shift money between cause areas and strategies, and plan across long timescales. But global optimization is less appropriate for career planning. Instead, it may be better to think about career choice in terms of local optimization: finding the best solution within a limited set of nearby options. Local optimization is more action-oriented, better at developing aptitudes, and less time-intensive.
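To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (the objective function and all of its parameters are made up for this example, and nothing here is career-specific): exhaustive search over the whole space plays the role of global optimization, while hill climbing from wherever you happen to start plays the role of local optimization.

```python
import math
import random

def value(x):
    """Toy objective: a bumpy 1-D landscape with several local optima."""
    return math.sin(3 * x) + 0.5 * math.cos(7 * x) - 0.05 * (x - 2) ** 2

# "Global" optimization: evaluate the whole space and keep the single best point.
candidates = [i / 100 for i in range(-500, 501)]  # x in [-5, 5]
global_best = max(candidates, key=value)

# "Local" optimization (hill climbing): start wherever you happen to be and only
# consider small moves, stopping once no nearby step improves the objective.
def hill_climb(x, step=0.01, max_iters=10_000):
    for _ in range(max_iters):
        best_neighbor = max((x - step, x + step), key=value)
        if value(best_neighbor) <= value(x):
            return x  # a local optimum: no nearby step helps
        x = best_neighbor
    return x

local_best = hill_climb(random.uniform(-5, 5))
print(f"global search: x = {global_best:.2f}, value = {value(global_best):.3f}")
print(f"hill climbing: x = {local_best:.2f}, value = {value(local_best):.3f}")
```

The toy point carries over: the local search is cheap and always makes progress from where you already are, but it can settle on a merely nearby peak, while the global search finds the true best point only by evaluating everything.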
The differences between global and local optimization are perhaps similar to the differences between sequence-based and cluster-based thinking [reference Holden’s post]. Like sequence-based thinking, which asks and answers questions with linear, expected-value-style reasoning, global optimization is too vulnerable to subtle changes in parameters. Perhaps I’ve enrolled in a public health program but find AI safety and animal suffering equally compelling cause areas. If I’m too focused on global optimization, a single new report from the Open Philanthropy Project suggesting shorter timelines for transformative AI might lead me to drop out of my program and start over as a software engineer. But perhaps the next day I find out that clean meat is not as inevitable as I once thought, so I leave my job and begin studying bioengineering.
Global optimization makes us more likely to vacillate excessively between potential paths. The problem is that we need some stability in our goals to make progress and develop the aptitudes necessary for impact in any field. Add to this the psychological stress of constantly second-guessing one’s trajectory, and global optimization starts to look like a bad strategy for career planning. The alternative, local optimization, would have us look around our most immediate surroundings and do our best within that environment. Local optimization seems like a better strategy if we think that “good correlates with good” and that aptitudes are likely to transfer if we later become convinced that, no, really, I should have done this other thing.
I think the difficult thing is to find the right balance between these two modes of optimization. We don’t want to fall into value traps or otherwise miss the forest for the trees, focusing too much on our most immediate options without considering more drastic changes. But too much global optimization can be similarly dangerous.