Community Organiser for EA London
There is also an Airtable version of that directory that is more up to date; I'll update the Google Sheet.
Is engagement the thing you want to optimise for over impact or are the two highly correlated for you?
I don't think I've ever called myself an effective altruist. Part of it is the small identity idea mentioned in the original post; another part is that it doesn't seem correct to call myself effective when there are large uncertainties about the prioritisation of causes and interventions, so new evidence could come up showing I was actually very ineffective.
On a more practical level, it’s easier to have conversations with people who are newer to EA or are sceptical of certain aspects of it when I’m not calling myself an EA and making it seem like something you are either in or out of.
It's also probably easier to find flaws in a topic when it isn't part of your identity: it reduces the chance of defensiveness, and I think I should try to make it easy to always stay open to potential problems in EA.
Generally, for most engagement there is a vast discrepancy between viewers, people who interact, and people who comment or post.
1% rule—link with more details.
It's great to see your intro. If you're interested, there is a group on Facebook for disabled and chronically ill people interested in EA. There are also some other groups mentioned in this directory that you may find useful.
Leopold Aschenbrenner has written about this here.
“The same technological progress that creates these risks is also what drives economic growth. Does that mean economic growth is inherently risky? Economic growth has brought about extraordinary prosperity. But for the sake of posterity, must we choose safe stagnation instead? This view is arguably becoming ever-more popular, particularly amongst those concerned about climate change; Greta Thunberg recently denounced “fairy tales of eternal economic growth” at the United Nations.
I argue that the opposite is the case. It is not safe stagnation and risky growth that we must choose between; rather, it is stagnation that is risky and it is growth that leads to safety.
We might indeed be in a “time of perils”: we might be advanced enough to have developed the means for our destruction, but not advanced enough to care sufficiently about safety. But stagnation does not solve the problem: we would simply stagnate at this high level of risk. Eventually, a nuclear war or environmental catastrophe would doom humanity regardless.
Faster economic growth could initially increase risk, as feared. But it will also help us get past this time of perils more quickly. When people are poor, they can’t focus on much beyond ensuring their own livelihoods. But as people grow richer, they start caring more about things like the environment and protecting against risks to life. And so, as economic growth makes people richer, they will invest more in safety, protecting against existential catastrophes. As technological innovation and our growing wealth have allowed us to conquer past threats to human life like smallpox, so can faster economic growth, in the long run, increase the overall chances of humanity’s survival.
This argument is based on a recent paper of mine, in which I use the tools of economic theory—in particular, the standard models economists use to analyze economic growth—to examine the interaction between economic growth and the risks engendered by human activity.”
Does this include how it might limit your ability to move for work, which might be the most important factor in salary/impact?
Could you turn that Google Doc into a post, Sam?
I think it would be valuable to share with others how someone has thought about their morals.