I primarily write academic papers and do outreach through my blog. I do try to post here when possible (and I always appreciate cross-posts!), but please do check dthorstad.com for my academic papers and reflectivealtruism.com for outreach.
David Thorstad
Thanks for the kind words, Jamie!
I always appreciate engagement with the blog and I'm happy when people want to discuss my work on the EA Forum, including cross-posting anything they might find interesting. I also do my best to engage on the EA Forum when I can: I posted this blog update after several EA Forum readers suggested I do it. I'm hesitant to outright post my blog posts as EA Forum posts, though. Although this is in many senses a blog about effective altruism, I'm not an effective altruist, and I need to keep enough distance in terms of the readership I answer to, as well as how I'm perceived.
I wouldn’t complain if you wanted to cross-post any posts that you liked. This has happened before and I was glad to see it!
Thanks mhendric! Those are both good papers to consider and I'll do my best to address them.
I didn't know about the "But is it altruism" paper. Please do send it when it is out—I'd like to read it and hopefully write about it.
Interesting! I think this should be manageable. Would people listen to this?
Thanks Jason! And yes, I’m a southern boy. Vandy is just what I was looking for. I appreciate the kind words and your continued readership.
Thanks mhendric! I appreciate the kind words.
The honest truth is that prestige hierarchies get in the way of many people writing good critiques of EA. For any X (=feminism, marxism, non-consequentialism, …) there’s much more glory in writing a paper about X than a paper about X’s implications for EA, so really the only way to get a good sense of what any particular X implies for EA is to learn a lot about X. That’s frustrating, because EAs genuinely want to know what X implies for EA, but don’t have years to learn.
Some publications (The good it promises volume; my blog) aim to bridge the gap, but there are also some decent papers if you’re willing to read full papers. Papers like the Pettigrew, Heikkinen, and Curran papers in the GPI working paper series are worth reading, and GPI’s forthcoming longtermism volume will have many others.
In the meantime … I share your frustration. It’s just very hard to convince people to sit down and spend a few years learning about EA before they write critiques of it (just like it’s very hard to convince EAs to spend a few years learning about some specific X just to see what X might imply for EA). I’m not entirely sure how we will bridge this gap, but I hope we do.
I’ll try to write more on the regression to the inscrutable and on AI papers. Any particular papers you want to hear about?
Thanks Vasco! I appreciate your readership, and you’ve got my view exactly right here. Even a 1% chance of literal extinction in this century should be life-alteringly frightening on many moral views (including mine!). Pushing the risk a fair bit lower than that should be a part of most plausible strategies for resisting the focus on existential risk mitigation.
Thanks Milena! Let me know what you think.
Thanks Devin! Let me know what you think.
Thanks :)
Blog update: Reflective altruism
Often, to be honest, it goes the other way. The average engaged EA knows a tremendous amount about EA, whereas many educated readers (including academics, who are another key part of my audience) know relatively little.
I guess one key audience of mine is academic philosophers. This audience often wants to see discussions of philosophical issues in population ethics, decision theory, and the like at a level that assumes quite a high level of background (often, alas, more than I have!).
I think in practice I often don’t provide the second audience (academics, especially philosophers) with as much content as I’d like for them, and I’m trying to do what I can to grow my audience a bit more evenly.
Thanks Vasco! I really appreciate your comment, and you're not the first to say so.
I’m going to post a blog update next month, and I’ll try to post some updates going forward after that.
I try to make sure I have enough independence from EAs that I can speak my own mind without having to change my views or how I say things, or what I assume as background and so on. That means I don’t usually want to directly post to the EA Forum, but I’d be thrilled if someone else wanted to crosspost some/any/all posts.
David
Agreed, and will do!
Here’s a proposed change to their plans: dump the focus on AI safety. This kind of change can only come from independent scrutiny.
It would be more likely to find the truth.
How about a blog update in a month or so about the post series I’ve written so far, lessons learned, and future directions, posted to the EA Forum?
Thanks Richard—I’ve appreciated your comments on the blog!
Thanks! I always appreciate engagement and would be very happy to see any of my posts discussed on the EA Forum, either as linkposts or not.
I need a bit more independence than the EA Forum can provide. I want to write for a diverse audience in a way that isn’t beholden primarily to EA opinions, and I want to be clear that while much of my blog discusses issues connected to effective altruism, and while I agree with effective altruists on a great many philosophical points, I am not an effective altruist.
For that reason, I tend not to post much on the EA Forum, though I do try to comment when I can. I’d be happy to comment at least to some degree on any linkpost, and I’m always very responsive to comments on my blog.
I appreciate that this can be a bit frustrating, but I need to be clear about who I am and who my audience is.
It is sometimes hard for communities with very different beliefs to communicate. But it would be a shame if communication were to break down.
I think it is worth trying to understand why people from very different perspectives might disagree with effective altruists on key issues. I have tried on my blog to bring out some key points from the discussions in this volume, and I hope to explore others in the future.
I hope we can bring the rhetoric down and focus on saying as clearly as possible what the main cruxes are and why a reasonable person might stand on one side or another.
I really liked and appreciated both of your posts. Please keep writing them, and I hope that future feedback will be less sharp.