Questions about Effective Altruism in academia (RAQ)
I have many questions about being an academic effective altruist and very few answers; maybe others have them.
If you too have questions, some of them may be answered here:
https://80000hours.org/career-guide/top-careers/profiles/valuable-academic-research/
https://80000hours.org/topic/careers/in-research/academic-research/
If not, let’s talk about them in the comment section and bootstrap our academic effectiveness.
Broadening it out a little, many EA organisations (at the very least GiveWell, CEA and Leverage Research) are heavily research-focused, and in some cases founded and staffed by people who were on the academic track and wanted to be academics or researchers. So it’s worth considering them at the same time, partly as a related alternative which will appeal to some of those interested in this thread.
I’ve wondered lately whether there could be a future for crafty philosophers to carve out a niche in EA. Many economists of late have been able to use their skills to do things that, on their face, do not appear to be economics as traditionally conceived. There seems to be a similar opportunity in philosophy, given the skill set that philosophers bring to the table. The greatest difficulty, I think, is that philosophy (and philosophers) have typically been interested in advancing specific discussions about particular philosophical problems; in a sense, the goal is to discover what’s true. In that light, EA simply isn’t as interesting to many philosophers. It doesn’t seem to pose great unanswered philosophical questions. The premise of EA is relatively simple, and given many philosophers’ sympathies toward consequentialism, it may be met with wide acceptance. But that doesn’t leave anything to discuss philosophically, which is an obvious problem for philosophy as traditionally conceived. Still, I see no good reason why at least some philosophers should not branch out in their line of work and become interested in a sort of “everyman” ethics. This would involve less discussion about what is right and more persuasion. I certainly think there’s room in the field for it, and given that many departments are on a sort of “justification treadmill”, having to justify their existence to university administrators and the general public, it may be exactly what the field needs.
I have actually heard some moral philosophers lament that when people get sick they call a doctor, when their car has problems they call a mechanic, and so on, but when they have a moral predicament, no one calls a moral philosopher. It seems to me that EA is a perfect platform to be advanced by philosophers, and that at least some philosophers might welcome the opportunity. The question that needs to be answered is whether this can be done by a philosopher who is still building a career, or whether it must be relegated to people like Singer who already have successful careers.
I understand that the Centre for Effective Altruism (TGPP, GWWC, etc.) does and has done a lot of philosophical and methodological research, so you might want to talk to them.
My hunch is that most useful philosophical discussion is already being had at FHI, and that the lower-hanging fruit is elsewhere. It’s not just Bostrom there who is smart. People like Stuart Armstrong and Toby Ord, and also Will MacAskill, who is affiliated with the wider University of Oxford, think similarly to most EAs and would think of most of the things that I would think of if I were in that field.
So I think my competitive advantage has to lie in some other skill that people like Toby and Will can’t easily get. This means that technical fields, including machine learning and genomics, are now a lot more exciting to me.
I’m not sure how much I believe this reasoning. FHI does a great job attacking neglected problems, but they are a tiny number of people in absolute terms, and there are a lot of important questions they’re not addressing.
That’s not to say that your competitive advantage doesn’t include technical skills, but I’m not sure that the presence of a handful of people could reasonably push the balance that far (especially as there are also several people with EA sympathies in a variety of technical fields).
There are a lot of questions in every field that are not being addressed by EAs, and I would hardly single philosophy out as more important than the others.
Whatever one says about the fact that philosophical investigation has spawned most of the fields we now know, or that the principles of clear thinking depend on it, this doesn’t imply that we need more of it in EA at the current margin. Rather, it’s initially plausible that we need less of it: Nick Beckstead has left philosophy, and about half of FHI staff time seems to be going to more empirical issues. In developing a priori or philosophical thinking about existential risk, there are only so many ways one can recapitulate existential-risk and astronomical-waste arguments; eventually, one must interact with evidence. For young philosophers, problems in prevailing philosophical thought, including elements of linguistic philosophy and anti-empirical tendencies, will make training and career progression difficult.
It seems more promising to try to evaluate what needs to be done practically to build better foresight into our political systems, and to consider which safety devices we can engineer into risky emerging technologies. Once some such endeavours are prioritised and then prototyped, there will be significant roles for coalition-building and outreach to relevant scientists who might contribute to safety and foresight efforts.
Concretely, if your research agenda were about a priori philosophy and did not include prioritisation or political research, then I don’t think it would be the highest-impact option.
I basically agree with all of this. :) I was just sceptical about the expressed reasoning, and I think it’s often worth following up in such cases as it can uncover insights I was missing.
Is it worthwhile to teach a class on effective altruism as a special course for undergrads? I recently saw a GWWC pledge party where many new undergrads had decided to take the pledge after a few months in such a course, though selection effects might have been a large part of it.
I’ve wondered this myself; it could be very high impact given the multiplier effect. On the other hand, any career has advocacy and influence potential, and not every department and university will necessarily let you offer this course.
Is an academic’s influence more important through creating new EAs or through the research they do themselves?
Alternatively, is it possible to influence a field and get other researchers more interested in useful research, and if so, how?
My hunch is that for people who are in the top 5% of EA researchers, most of the impact will flow from people building on your actual discoveries, or maybe from noticing how excellent your research is, whereas people who are below that might want to make more conscious efforts toward building institutional allies, working on journals, being politically engaged, et cetera.
I’m starting a PhD soon. That gives me the freedom to research a large plethora of topics, some more valuable than others, ranging from information theory and the nature of information to the mathematics of altruism and the formation of singletons. In between, all sorts of questions about genetic determination and behavioral genomics are allowed, as well as primatology. My current inclination is to research potential paths for altruism in the future: where it can lead in naïve evolutionary models and in less naïve ones. I will also do research with other EAs on how to impart moral concepts to artificial general intelligences. Are there better counterfactual alternatives?
What field are you studying? It seems like biology, but I’m not sure. I’m planning on applying to programs in economics, interested in cause prioritization and in studying the interplay of social networks with economic decisions. I’d be interested to see what you decide.
Biological anthropology, with an adviser whose latest book is in philosophy of mind, whose next book is on information theory, whose previous book was on, of all things, biological anthropology, and who spent most of his career as a semiotician and neuroscientist. My previous adviser was a physicist working in the philosophy of physics who turned into a philosopher of mind. My main sources of inspiration are Bostrom and Russell, who defy field borders. So I’m basically studying whatever you convince me makes sense at the intersection of interestingly complex and useful for the world, except for math, code and decision theory, which are not my comparative advantage, especially not among EAs.
Thanks for asking for suggestions. :)
I would tend to focus on AGI-related topics, though you may have specific alternate ideas that are compelling for reasons that I can’t see from a distance. In addition to AGI safety, studying political dynamics of AGI takeoff (including de novo AGI, emulations, etc.) could be valuable. I suggested a few very general AGI research topics here and here. Some broader though perhaps less important topics are here.
What are the unsolved problems related to infinite ethics that might be worth tackling as an academic? Some relevant writings on this topic, to see what the field looks like:
I would ask Amanda MacAskill.
On the topic of interesting questions about large utilities, I think it could conceivably be useful to analyse the notion that it might be good to refuse one ‘Pascal’s Mugging’ proposition in order to pursue other ones, and to see what this might imply, as well as to analyse how mugging works out in game theory or decision theory, to decide whether its adversarial nature is important.
Are there tips for undergrads and graduate students on how to skip classes, or do less for them, by explaining that they are EAs and what they intend to accomplish with their time, when classes don’t align with their goals?
Would it be valuable to have edited books on the many fields that are useful for EAs? Things like:
Philosophy of Mind for Computationalists
A mega-course for aspiring philosophers
Philosophy for EAs
Mathematics for EAs: A Guide for the Perplexed
Anthropology for EAs
Etc., but edited into books, like these here, here, and here.
I tend to think there are already tons of lists of important reading out there, and I’m not sure an EA-specific one would be particularly novel. It might be more valuable to add small summaries of these references on Wikipedia where they aren’t already present, since reading a summary on Wikipedia is much faster than reading the entire original article. Doing it on Wikipedia also helps avoid reinventing the wheel if someone else has already written a summary.
Papers vs. books
Long ago I decided against writing papers. I had written some four papers, of mediocre writing quality and decent idea quality. I decided against it for two reasons. One is that anything I publish on lesswrong.com or effective-altruism.com will be read by hundreds of people within 24 hours of publication.
The other is that paper reading and citation follow a power-law distribution: http://arxiv.org/abs/1402.3890
The median number of readers is around 10 (about 100 times less than the readership of a lesswrong.com post, and maybe 50 times less than one on effective-altruism.com).
Add to that the enormous cost of writing papers, having them reviewed, the randomness involved, and the three-year-long publication lag, and you have a sure recipe for me not wanting to do it.
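To make the median-versus-average gap concrete, here is a minimal simulation sketch of a heavy-tailed readership distribution. The Pareto shape and scale below are illustrative assumptions chosen to give a median near 10; they are not figures taken from the linked paper.

```python
import numpy as np

# Illustrative only: draw "readers per paper" from a classical Pareto
# distribution (heavy-tailed). Shape and scale are assumptions, not
# estimates from the cited study.
rng = np.random.default_rng(0)
shape, scale = 1.1, 5.0                       # tail gets heavier as shape -> 1
readers = (rng.pareto(shape, size=100_000) + 1) * scale

print(f"median readers per paper: {np.median(readers):.0f}")
print(f"mean readers per paper:   {readers.mean():.0f}")
print(f"share of papers with fewer than 20 readers: {(readers < 20).mean():.0%}")
```

Under a distribution like this, most papers get only a handful of readers even though a few outliers pull the average far above the median, which is the gap the comparison with blog readership is trading on.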
I think there are fields, like math, and times, like before 1970, when the truth was a sufficiently strong attractor that the terrible, hedging writing of the sort used in academic papers could become prominent as long as it was true. That, however, was before TV, the Internet, marketing fads, blogs, and soon Google Glass and virtual reality. The truth is still an attractor, but it is now far weaker in comparison to aesthetics, marketing and other variables. So we have to add those adornments to true ideas, or they will perish. Also, if my ideas are all wrong, I want them to be known by enough people that they can notice, give me feedback and make me change my mind. I care about consequences. I want my ideas to be consequential, because that creates positive feedback, which increases the probability that I’ll hit the truth over the long run. And if I don’t, someone else will. Writing papers will give me none of that.
Books, however, seem to be a different beast. First, they are long enough to convey interesting ideas (of a more philosophical type); second, though their readership also follows heavy-tailed distributions, there are several marketing strategies that can aid publication and increase the number of readers. They can also, like papers and unlike blogs, be cited as decent academic evidence in good standing.
They can be monetized as well, whereas one must pay to publish papers.
In 2009 I thus convinced myself that writing books is superior to writing papers, so I stopped writing papers and wrote a book.
Two top academics (Bostrom and Deacon) recently recommended that I write papers, so I decided to revisit the case for papers versus books.
So it’s 2015. By all accounts, it seems to me that the case for papers is worse today than it was in 2008.
The power law continues for papers. Publishing them continues to be costly, and not paying to make them public is shooting oneself in the foot. Thanks to Kindle and the like, authors have more control and a greater share of the money that goes to books; though less money goes to printed books, more goes to authors. Papers’ usage decays over time for most authors, especially on empirical matters; even papers by authors as brilliant as Hilary Putnam are less read today than before. Books, by contrast, can endure: people still read Word and Object.
Apart from exceptions who publish papers in places like arxiv.org (Garrett Lisi, Tegmark, Tononi), I don’t know who has risen to intellectual prominence via papers in the last many years.
I am glad to hear counterarguments, because as it stands this seems like the ultimate no-brainer: there is not a single thing papers are better than books for:
Number of readers
Conveying complex ideas
Getting feedback on ideas
Money
Career prospects
Author’s name being remembered and sought after
Resilience over long stretches of time
Compatibility with technological advances (current and expected)
Odds of finding collaborators in virtue of having written them.
The few properties papers beat books on, like peer feedback and being shorter and easier to write, are completely dominated by blogging, especially on public science or philosophy blogs.
My current evaluation is that in 2008 writing books was substantially better than writing papers; in 2015, I think writing books and not papers is the secret sauce of being a successful academic.
But I’m happy to be convinced otherwise if you think the arguments above don’t hold, or there are even stronger ones I forgot to consider that dominate over them all.
Is it possible to successfully publish philosophy books if you are not widely published in journals? My suspicion is that it would be very difficult. It depends, of course, on what type of books you aim to publish. If they are directed at the philosophical community, there will likely be widespread confusion as to why you did not first publish your ideas in a paper so that you could receive criticism and have the opportunity to really work through the arguments against your positions. It would be very odd indeed for a philosopher to write books directed at professional philosophers if he never publishes papers.
However, if you aim to write for the popular audience, that concern may not hold any weight. I would be curious to know, however, whom you see as your future employer? If you are going to be working for a research institution, there will almost certainly be a requirement that you publish your work in journals. It may be possible, however, to work for a teaching institution and have a minimal publishing requirement, thus the majority of your writing could be done in the form of books for the popular audience. You may still have difficulty gaining credibility before publishers, however.
Edit: I just realized that you never actually said your field was philosophy. So, if it is another field, take my post lightly.
More important than my field no longer being philosophy (though I have two degrees in philosophy and identify as a philosopher), the question you could have asked is: why would I want a philosophical audience to begin with? It seems to me there is more low-hanging fruit in nearly any other area in terms of people who could become EAs. Philosophers have an easier time becoming EAs, but attracting the top people in economics, literature, the visual arts and other fields, people who may enjoy reading the occasional popular science book, is much less replaceable.
This line of reasoning seems strong, but seriously conflicts with academics’ current behaviour, so I’d be interested to see if they have much useful to say in response.
I am an academic and I have published about 50 papers and one book. I think the book was helpful in getting more popular media articles. However, my impression is that the typical academic book only sells a few hundred copies. One could hope for a few more reads than this because of libraries, but some people buy a book and do not read it. As for papers, there is a big difference between conference and journal papers. I could believe that the median paper gets zero citations, because the median paper is a conference paper. However, even relatively low-ranked journals have an impact factor of one, which works out to roughly one citation per paper per year, or about five citations over five years. And if you think about it, if the average paper has 30 references, that means the average paper also gets cited 30 times, so the average journal article would be cited significantly more. Furthermore, in my experience on sites like ResearchGate and Academia.edu, papers typically have one to two orders of magnitude more reads than citations. So I would guess the average journal article gets read thousands of times. I think this is significantly more than the typical academic book or LessWrong post. But I don’t have a lot of hard data here, so I am happy to update.
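As a back-of-the-envelope check on that reasoning, here is the arithmetic laid out explicitly; all numbers are the rough assumptions from the comment above, not measured data.

```python
# Rough assumptions from the comment above, not measured data.
impact_factor = 1                   # low-ranked journal: ~1 citation per paper per year
citations_over_5_years = impact_factor * 5

references_per_paper = 30           # if every paper cites ~30 others, then on average
avg_citations_per_paper = references_per_paper  # every paper is also cited ~30 times

# "one to two orders of magnitude more reads than citations"
reads_low = avg_citations_per_paper * 10
reads_high = avg_citations_per_paper * 100

print(citations_over_5_years, avg_citations_per_paper, (reads_low, reads_high))
# -> 5 30 (300, 3000)
```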
As for being successful in academia, I think only writing books would be very risky. People pay some attention to total citations, and this would likely be higher with more pieces of work (papers versus a book). Also, the h-index is the largest number h such that h of your publications have each been cited at least h times (see Bostrom’s). One rule of thumb is achieving an h-index of 12 in order to get tenure, and publishing significantly more than 12 books before tenure sounds difficult. Maybe evaluators would change their metrics, but I would not count on it.
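For concreteness, here is a minimal sketch of how an h-index is computed from per-paper citation counts; the citation numbers in the example are made up.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for ten papers.
print(h_index([50, 30, 22, 15, 9, 7, 6, 3, 1, 0]))  # -> 6
```

Since even a heavily cited book counts as a single publication, it can contribute at most one point to the h-index, which is why a books-only strategy looks risky by this metric.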
I am not considering what Bostrom/Grace/Besinger do to be philosophy stricto sensu for the purposes of this question.
After replaceability considerations have been worked through in Ben Todd’s and Will MacAskill’s theses at Oxford, and Nick Beckstead has made the philosophical case for the far future, is there still a large marginal return to be had from research on something that is philosophy stricto sensu?
I ask this because my impression is that after Parfit, Singer, Unger, Ord, MacAskill and Todd, we have run out of philosophical efforts that would have great consequential impact. Not because improvements cannot be made, but because they would be minor relative to using that time for other, less stricto sensu endeavours.
Personally, I wouldn’t feel bad if we left technical philosophy to MacAskill and Ord for a while, as they’re surely going to keep doing it. But maybe you want to get a professorship somehow, and if so, your choices are reduced.
I’ve left the field of philosophy (where I was mostly so I could research what seemed interesting rather than what the university wanted; as Chalmers puts it, “studying the philosophy of X”, where X is whatever interests me at any time) and am now in biological anthropology. From my many years researching the topic, it seems that becoming a professor in non-philosophy fields is much easier than in philosophy. Also, switching fields between undergrad and grad school is easy, in case someone reading this does not know.
Interesting. I’m sure you could carve out an interesting niche in that area. One immediately obvious issue is how modern humans use various environmental resources. More distantly relevant issues, which you might still be closer to than any other current EA academic, include the culture of science and of innovation, and human views on our place in relation to evolution, including transhumanism. I’m sure there are others.
It seems like you may have more insight than anyone else on whether you should go into philosophy. If you have a high-impact idea or set of ideas that you think you can contribute, perhaps you can go in. Before MacAskill and Ord, I don’t think many people thought there was an applicable and useful argument to be made in philosophy, but they proved that wrong.
The returns of philosophy are often not seen immediately. It was philosophers, after all, who brought us consequentialism, and without consequentialism it is doubtful that EA would exist. However, there is still a remaining question as to whether future philosophical breakthroughs will have as wide an impact, or whether the insights will be far more technical in nature.