Another spiritual EA predecessor is the Efficiency Movement. Focusing on improving the efficiency of existing institutions to improve public welfare, EM had a lot of support from philanthropists like Rockefeller and Carnegie, politicians like Theodore Roosevelt and Herbert Hoover, and other notable figures like Brandeis and Bedaux. While not a 1-to-1 comparison, I think the focus on methodology and process analysis to maximise societal good across diverse cause areas appealed to the same demographic.
It’s like if EA managed to secure the backing of all the FAANG founders, several POTUSes and broad support in activist organisations. Apparently, it did inspire business process innovations, modern conservationism and antitrust initiatives, tackling major issues of the time.
I can’t find info on the “downfall” of EM. It seems to have just faded out of the public spotlight postwar. Perhaps it focused too much on a few influential decision-makers instead of appealing to the public. Like how Elon Musk, Vitalik Buterin and SBF are extremely supportive of EA, but the public doesn’t think of any of them as “EA public figures” unless they actively look.
I think it’s an exaggeration to say that Elon Musk is extremely supportive of EA.
I mean, SpaceX, OpenAI, Neuralink and Tesla all have goals ostensibly related to x-risk and longtermism, and have made strides in those areas. I can’t think of many billionaires running even one x-risk/longtermism-focused company, let alone several.
Plus his review of What We Owe The Future is “Worth reading. This is a close match for my philosophy.”
I get that he’s controversial and objectionable in other areas, but he’s clearly a longtermist.
Thanks for the response! I agree that he’s a longtermist philosophically, or at least acts this way. I think he’s not a “community EA” per se. For example, he founded OpenAI despite strong opposition from this community. He kicked Igor out of his Foundation. And the object-level quality of his ideas for improving the future isn’t very good (though I like Boring Company, I’m a big fan of digging holes).
Well … I’m not surprised, although I did not specifically know there was bad blood there. Were people reacting negatively to monetizing Pandora’s Lootbox, so to speak?
Also, I went down the rabbit hole of your profile for a bit; I’m broadly curious what interesting megaprojects you’d personally like to see attempted? I’m risk-taking, bored and like to think I’m entrepreneurial, so I’d like to take a whack at some speculative ideas that don’t exist yet.
edit: bruh i saw the edit, now i have to edit
Well honestly, I think EAs on an individual+philosophical level disagree more than they publicly show. Just meeting actual EAs, I see the occasional clash between neartermism vs longtermism, messaging, community building, animal vs human prioritisations. Of course this hasn’t spiralled into anything bad, but it’s there. I think if every EA was a billionaire, we’d see more awkward distancing as everyone pushes forward with their own thing. I don’t think Bill Gates actively associates with AMF, GiveWell or EA even though the Gates Foundation is actively working to eradicate malaria among other illnesses.
As for the object-level, yeah some are weird, but the net benefit is still great. I chalk it up to duds being inevitable when you take that many risks. For comparison, Steve Jobs certainly drove the consumer electronics industry forward, but he oversaw plenty of stupid duds that almost destroyed Apple. And he almost named the Macintosh the “Bicycle”.
Sorry, I often edit my comments after a first pass when I realize I had something important to add. I usually don’t warn people if I do it within a few hours of commenting.
I’m not sure exactly what you’re referring to, but people didn’t like the AGI meme being spread very widely, and OpenAI’s Theory of Change was pretty bad, even if you take Elon Musk’s comments at face value. Elon viewed artificial general intelligence as “summoning a demon” and inexplicably wanted to democratize it. This is a bad theory of change because democratizing demon-summoning is not a good way to make sure the world is not overrun by demons. OpenAI is relatively more circumspect now, but many other groups have jumped on the AGI hype train. It’s not clear to me whether this was inevitable vs. accelerated by the creation of OpenAI + impressive demos from them.
Thank you so much for your interest! Most projects I’m excited about are pretty high-context. Are you going to EAGx Singapore? I’m not planning to go myself but I can see if any of my coworkers will be excited to shill for potential longtermist megaprojects there.
I’m volunteering at EAGx, since I’m a matriculating student at the university itself!
In general, I’ve just been trawling around for more interesting projects to work on that have high uncertainty and require various skillsets. I get very uninspired with normal school/academic stuff, and I’m noticing other people can climb the career ladder better than I can so might as well lean into building. So I’m just starved for ideas.
Some examples of things that I’d be excited for generically smart + entrepreneurial junior people to try:
Design better PPE (pushing the envelope on protection, comfort, or cost)
AI alignment research distillation
Identifying/summarizing the relevant resources for rebuilding after civilizational collapse.
Broadly, coming up with a plan and executing quickly on creating and distributing “civilizational restart manuals”
Ventilation and sanitation projects that push the envelope on either protection or cost.
Getting better at forecasting and/or forecasting adjacent things
An example of a forecasting-adjacent thing is organizing a team of forecasters to work on specific problems
Another example is creating data pipelines to help forecasters forecast better/faster
Cheaper/faster versions of any of the above, at the potential expense of quality.
The trick is not hogging up/monopolizing the space, so it’s easy for others to move in.
High-quality translation of EA, longtermist, or rationality materials into local languages.
Doing a kickass job at university group or other community building.
Joining an existing EA or longtermist-oriented org working on important longtermist priorities.
Please note that while these things are all directly valuable, most of the value in doing them is a combination of skill-building, network-building, and helping you to orient yourself to doing useful EA work. Also please note that I’m just one generalist researcher who has very briefly thought about each of these problems, not an expert on any of them and certainly not an expert on your own career options.
hmmmmn further implementation questions hehe
The community building part doesn’t surprise me; I’ll read stuff other people have written, unless there’s an unpopular take you have that you’d like to mention. And of course shotgunning program and grant applications.
I’m interested in the civilisational collapse thing. I always considered it a valid and meaningful area of investment. I’m curious how you’d recommend someone early in their career get in on this? Because I always implicitly assumed “the government was handling it”, and that you’d need 30 years of civil service experience before getting to do anything.
What would you recommend to get started on forecasting? I have a fairly above-average track record, especially for unusual occurrences, and would not mind doing it as a hobby or full time. My issue was that I hated seeing decision makers ignore my findings/predictions, hence why I first went so hard into advocacy. I struggle to see how forecasting could go mainstream, but I find I’m earlier to trends than I expect.
E.g. think about a likely trajectory of civilizational collapse and what’s needed to restart it. Figure out a narrow subset of the problem (fertilizer?) and how you would do this if you were in charge of making it happen.
Maybe do some desk research on what’s already been done in the space, and try to branch out from there.
Which governments in particular are you thinking about? I guess my perspective (which to be clear is more theoretical; I can definitely be corrected by empirical evidence!) is that basically no government has ever put serious effort into this. Like why would they? This seems far outside of their organizational mandate, and even many EAs I know of don’t want to work on this because of the long time to impact and potentially grim worldview.
Probably start an account on Metaculus or Manifold Markets and just start predicting? You can also find study materials later, like the book Superforecasting and this YouTube series.
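For building the kind of legible track record mentioned below, the standard calibration metric on yes/no questions is the Brier score (it’s what Metaculus reports, among others). A minimal sketch; the probabilities and outcomes here are made-up illustrations, not real forecasts:

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# Lower is better; always guessing 0.5 scores exactly 0.25.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: predicted probabilities vs. what actually happened (1/0).
probs = [0.9, 0.7, 0.2, 0.6]
happened = [1, 1, 0, 0]
print(round(brier_score(probs, happened), 3))  # → 0.125
```

A score meaningfully below 0.25 across many resolved questions is the kind of evidence that’s easy for others to verify.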
I think if you build up a legible track record for your past predictions, and have good, clear arguments for each of your future predictions, then decision-makers in EA will probably pay attention. And at this point, EA is a large enough force in the world that you can have a lot of impact just by making EA decisions better.