One issue to consider is whether catastrophic risk is a sufficiently popular issue for an agency to use it to sustain itself. Independent organisations can be vulnerable to cuts. This probably varies a lot by country.
MHarris
Creepy Crawlies (an EA poem)
My main reaction (rather banal): I think we shouldn’t use an acronym like IBC! If this is something we think people should think about early in their time as an effective altruist, let’s stick to more obvious phrases like “how to prioritise causes”.
I’ve always thought the Repugnant Conclusion was mostly status quo bias, anyway, combined with the difficulty of imagining what such a future would actually be like.
I think the Utility Monster is a similar issue. Maybe it would be possible to create something with a much richer experience set than humans, which should be valued more highly. But any such being would actually be pretty awesome, so we shouldn’t resent giving it a greater share of resources.
For those who haven’t already read it: Ben Kuhn on startups serving emerging markets
This is a discussion that has happened a few times. I do think that ‘global priorities’ has already grown as a brand enough to be seriously considered for wider use, and perhaps even as the main term for the movement.
I’d still be reluctant to ditch ‘effective altruism’ entirely. There is an important part of the original message of the movement (cf pond analogy) that’s about asking people to step up and give more (whether money or time) - questioning personal priorities/altruism. I think we’ve probably developed a healthier sense of how to balance that (‘altruism/life balance’) but it feels like ‘global priorities’ wouldn’t cover it.
I’m all for focusing on the power of policy, but I’m not sure giving up any of our positions on personal donations will help get us there.
3 months late, but better than never: it’s incredibly inspiring to see how the community has grown over the past decade.
This sounds a lot like a version of preference utilitarianism, which is certainly an interesting perspective.
I know a lot of effort in political philosophy has gone into trying to define freedom—personally, I don’t think it’s been especially productive, and so I think ‘freedom’ as a term isn’t that useful except as rhetoric. Emphasising ‘fulfilment of preferences’ is an interesting approach, though. It does run into tricky questions around the source of those preferences (eg addiction).
I don’t mind rhetorical descriptions of China as having ‘less economic and political freedom than the United States’, in a very general discussion. But if you’re going to make any sort of proposal like ‘there should be more political freedom!’ I would feel the need to ask many follow-up clarifying questions (freedom to do what? freedom from what consequences? freedom for whom?) to know whether I agreed with you.
Well-being is vague too, I agree, but it’s a more necessary term than freedom (from my philosophical perspective, and I think most others).
I wonder if there would be a strong difference between “What do you think of a group/concept called ‘effective altruism’”, “Would you join a group called ‘effective altruism’”, “What would you think of someone who calls themselves an ‘effective altruist’”, “Would you call yourself an ‘effective altruist’”.
I wonder which of these questions is most important in selecting a name.
On this theme, I was struck by the 80,000 hours podcast with Tom Moynihan, which discussed the widespread past belief in the ‘principle of plenitude’: “Whatever can happen will happen”, with the implication that the current period can’t be special. In a broad sense (given humanity’s/earth’s position), all such beliefs were wrong. But it struck me that several of the earliest believers in plenitude were especially wrong—just think about how influential Plato and Aristotle have been!
Excession, Surface Detail and The Hydrogen Sonata are the three I’d recommend from a longtermist perspective.
Consider Phlebas is (by some margin) the worst novel in the series. It’s a shame it seems like the obvious place to start.
I’m certain EA would welcome you, whether you think AI is an important x-risk or not.
If you do continue wrestling with these issues, I think you’re actually extremely well placed to add a huge amount of value as someone who is (i) an ML expert, (ii) friendly/sympathetic to EA, and (iii) doubtful/unconvinced of AI risk. That combination gives you an unusual perspective which could be useful for questioning assumptions.
From reading this post, I think you’re temperamentally uncomfortable with uncertainty, and prefer very well defined problems. I suspect that explains why you feel your reaction is different to others’.
“But I find it really difficult to think somewhere between concrete day-to-day AI work and futuristic scenarios. I have no idea how others know what assumptions hold and what don’t.”—this is the key part, I think.
“I feel like it would be useful to write down limitations/upper bounds on what AI systems are able to do if they are not superintelligent and don’t for example have the ability to simulate all of physics (maybe someone has done this already, I don’t know)”—I think it would be useful and interesting to explore this. Even if someone else has done this, I’d be interested in your perspective.
How much did the $13 million shift the odds? That’s the key question. The conventional political science on this is skeptical that donations have much of an effect on outcomes (though it’s a bit more positive about lower-profile candidates like Carrick): https://fivethirtyeight.com/features/money-and-elections-a-complicated-love-story/
(In this case, given the crypto backlash, it’s surely possible SBF’s donations hurt Carrick’s election chances. I don’t want to suggest this was actually the case, just noting that the confidence interval should include the possibility of a negative effect, here.)
Signaling is a more interesting idea, but raises more questions about effectiveness. How much is it worth spending to get someone elected on the basis that they’ve endorsed pandemic prevention for self-interested reasons?
Thanks for sharing your talk.
I’m at the UK’s Competition and Markets Authority. Very happy to talk to anyone about the intersection of competition policy and AI.
I think, in general, personal consumption decisions should be thought of in the context of moral seriousness (see Will MacAskill’s comments in a recent podcast).
Should we take seriously efforts to avoid unnecessary emissions? Yes! Is EA doing this? I’m not sure. My impression is that EAs are fairly likely to avoid unnecessary flights, take public transport etc—that’s the attitude I take myself, anyway. This is less unusual than veganism—the thoughtful Londoners I’m surrounded by do the same. So I think it would be easy to underestimate the extent to which EAs do this, just because it’s less noteworthy.
EAs also fly to conferences which have air conditioning. Is this worth it? Anecdotally, a lot of good seems to emerge from in-person conferences. And air-conditioning is important for thinking and learning. So I think we’re probably in the right place here, but I’d be interested in a more detailed look at this question.
Should EAs reduce their emphasis on personal meat/dairy/egg consumption? Should they increase their emphasis on their personal carbon footprint?
I think the answer is probably a bit of both.
I strongly doubt there is truly a trade-off here. I don’t think veganism is an especially emphasised aspect of EA, and if there is a strong case for specific changes to reduce personal emissions, it could be advocated on its own merits, in addition to veganism.
This seems like a major success in influencing US policy.
This book is a core text on this subject, which explicitly considers when specific agencies are effective and motivated to pursue particular goals: https://www.amazon.co.uk/Bureaucracy-Government-Agencies-Basic-Classics/dp/0465007856
I’m also reminded of Nate Silver’s interviews with the US hurricane forecasting agency in The Signal and the Noise.