I had a similar thought to Shiny. Am I correct that an agent following your suggested policy (“if I previously turned down some option X, I will not choose any option that I strictly disprefer to X”) would never *appear* to violate completeness from the perspective of an observer who could only see their decisions and not their internal state? And assuming completeness is all we need to get to full utility maximization, does that mean an agent following your policy would act like a utility maximizer to an observer?
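For concreteness, here is how I'm picturing that policy as a minimal Python sketch (the names `choose`, `prefs`, and `rejected` are mine, purely illustrative, not from your post): the agent remembers every option it has turned down and filters out anything it strictly disprefers to one of them.

```python
# Preferences are a set of (better, worse) pairs; pairs missing in both
# directions are incomparable, which models incompleteness.

def strictly_prefers(a, b, prefs):
    """True iff the agent strictly prefers option a to option b."""
    return (a, b) in prefs

def choose(options, rejected, prefs):
    """Choose any option not strictly dispreferred to a rejected option.
    Assumes at least one permissible option exists; the tie-breaking rule
    among permissible options is left unspecified."""
    permissible = [o for o in options
                   if not any(strictly_prefers(r, o, prefs) for r in rejected)]
    choice = permissible[0]
    rejected.update(o for o in options if o != choice)
    return choice

# A is strictly better than A-; A and B are incomparable.
prefs = {("A", "A-")}
rejected = set()
print(choose(["B", "A"], rejected, prefs))   # picks "B", so "A" is rejected
print(choose(["A-", "B"], rejected, prefs))  # "A-" is now ruled out -> "B"
```

In this toy run the agent turns down A on the first choice, so it never later picks A-, and an observer who only sees the two choices can't tell that A and B were incomparable.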
Nick_Anyos
Podcast Interview with David Thorstad on Existential Risk, The Time of Perils, and Billionaire Philanthropy
Thank you for this super kind comment! ^_^
Thank you for both comments! :)
Personally I feel exhausted by the last few months of what felt like a firestorm of angry criticism. Much of it, mainly from the media and Twitter, was very antagonistic and in poor taste. At the same time, I think our movement has a lot of room for improvement.
I feel the same. Hopefully with this podcast I can increase the percentage of EA criticism that is constructive and fun to engage with.
My guess is that 70%+ of critiques are pretty bad (as is the case for most fields). I'd likewise be curious about your ability to push back on the bad stuff, or, maybe better, to draw out information that highlights potential issues. Frustratingly though, I imagine people will join your podcast and share things in inverse proportion to how much you call them out. (This is a big challenge for podcasts.)
I agree, although I think that some subset of the low-quality criticism can be steelmanned into valid points that may not have come up in an internal brainstorming session. And yes, I am still experimenting with how much pushback to give; the first and second episodes are quite different on that metric.
Similarly, I don't feel like the argument brought forth against the use of the word "aligned" when discussing a person was very useful. In that case I would have liked for you to have tried to really pin down what a good solution would look like. I think it's really easy to err on the side of "overfit on specific background beliefs" or "underfit on specific background beliefs", and tricky to strike a balance.
I think this is fair, and I honestly don't have a good solution. I think the word "aligned" can point to a real and important thing in the world, but in practice it risks just being used to point to the ingroup.
Thank you for your comment and especially your guest recommendations! :)
Note that saying “this isn’t my intention” doesn’t prevent net negative effects of a theory of change from applying. Otherwise, doing good would be a lot easier.
I completely agree. But I still think that saying when a harm was unintentional is an important signaling mechanism. For example, if I step on your foot, saying “Sorry, that was an accident” doesn’t stop you from experiencing pain but hopefully prevents us from getting into a fight. Of course it is possible for signals like this to be misused by bad actors.
I also highly recommend clarifying what exactly you're criticizing, i.e. the philosophy, the movement norms, or some institutions that are core to the movement.
Ideally all of the above, with different episodes focusing on different aspects. Though I agree I should make the scope of the criticism clear at the beginning of each episode. I think Ozzie's comment below has a good breakdown that I may use in the future.
New EA Podcast: Critiques of EA
Towards Donor Coordination Via Mechanism Design
Hi! My name is Nick and I have been reading articles on the EA Forum since it started and finally got around to making an account today.
I first became aware of Effective Altruism when I was 17 (in 2011). I had been working a summer job and wanted to know the best charity to donate some of the money to. Through that search I found GiveWell and became very interested in effective charities. A year or two later (around 2013) I came across Less Wrong and read the Sequences. Through Less Wrong I found many other places to learn about Effective Altruism and over time got more and more interested in it.
I live in Australia and will be attending EA Global: Melbourne in August. A few EA friends and I recently started EA Canberra, and so far meetings have been going really well. I have a blog where I write about Effective Altruism, veganism, and other topics. I am looking forward to engaging more with the Effective Altruism community in the future.
(Just wanted to add a counter datapoint: I have been a local community organizer for several years and this has not been my experience.)