Michael Simm is a disruptive systems expert and nonprofit founder focused on fighting homelessness and poverty. He earned his Bachelor's in Political Science with a minor in Innovation in Society from ASU's honors college in under two years. During his time at ASU, he published a peer-reviewed paper on how a Universal Basic Income would brighten the future of our economy and society. Having come to the conclusion that guaranteed income is the most effective way to fight poverty, he envisioned a nonprofit that could unleash its massive anti-poverty potential.
In addition, he is a generalist with an understanding of disruptive policies, technologies, and their implications for the future. He knows a moderate amount about most topics in politics and finance, and a great deal about renewables, guaranteed income, and complex systems analysis.
Michael Simm
As someone who only recently got involved with EA (I read some books in early 2022, then took an in-depth EA course last summer), I've been watching this and have a few general thoughts on what I've seen and what I think about EA going forward (not just as an ideology but as a community). Sorry if this seems a bit off-topic, since I can't say I feel particularly 'compromised'.
Personally, I was not surprised that FTX collapsed and was a fraudulent organization. I had been following the crypto space for a few years, and SBF was an incredibly obvious con man in my view. I didn't even know that the Future Fund was part of FTX until the situation blew up, and I was like "oh great" when I heard EA come up in the news.
I was rather disappointed in EA leadership for not being more cautious about SBF's intrusion into the space. When someone shows up from an industry well known for fraud and instantly becomes a meaningful percentage of overall movement funding, someone has to do an in-depth audit before distributing goodwill.
However, I think the way the EA community reacted to the situation on this forum was positive. I also think that leadership going through this experience will substantially reduce the likelihood of a similar (or worse) thing happening again in the future (both near and long term), since they will enforce much stronger regulations. It may have been good for EA to suffer a disaster that rattled everyone but did not destroy the movement.
Overall, I was pleased with how most of the community responded (I think the infighting narrative was overblown). Going forward, I'm happy to describe myself as an Effective Altruist, and I think we've got a bright future ahead. In fact, I think EA is likely to be more positively impactful than much of the rest of the trillion-dollar global nonprofit industry.
To people who have been beating themselves up about it: it wasn't your fault. I have seen most people react very intelligently and understand this for what it was: a major failure of leadership and regulations, and an opportunity to ensure that it doesn't happen again while not getting derailed from the strong philosophical basis that EA has.
This sounds like a fantastic concept! I can't wait to see how this gets refined and scaled up to improve how more major foundations go about making grants (maybe even $100M+/year foundations).
I had a quick question about the types of foundations involved in your pilot. What kinds of limits, if any, did they have on where they could disburse funds or what kinds of people (or animals) they could aid?
A lot of the foundations I've seen can only make grants in a particular state or city, usually in a developed country where $1 fundamentally goes less far than it would with an international charity. Did you end up having conflicts between the participants' desire to maximize impact and any limitations they might have been subject to?
I think this could be a very useful tool for improving public knowledge about the uncertainties in various areas of science.
But I wonder how prediction markets can be effectively implemented in decision-making processes.
For example, if there’s a vaccine going through testing, or an intervention being studied, do you think there could be a smart way to integrate prediction markets into more optimal policy decisions?
You could also go the other way and call out currently implemented interventions, policies, and programs whose prediction markets show high uncertainty (see the sketch below).
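To make that concrete, here's a minimal sketch of one way a market-implied probability could feed into a decision rule. This is purely my own illustration, not anything you've proposed; the contract, prices, and dollar figures are all hypothetical.

```python
# Hypothetical sketch: treating a prediction-market price as a probability
# estimate inside a simple expected-value decision rule. All numbers and
# names below are made up for illustration.

def decide(market_price: float, benefit: float, cost: float) -> str:
    """Treat the price of a 'this intervention works' contract (0-1)
    as the probability of success, then compare expected benefit to cost."""
    expected_benefit = market_price * benefit
    return "fund" if expected_benefit > cost else "hold / gather more evidence"

# e.g., a contract on "vaccine X passes Phase 3" trading at $0.70,
# with an estimated $50M benefit if it works and a $30M rollout cost:
print(decide(0.70, benefit=50e6, cost=30e6))  # -> "fund" (0.7 * 50M = 35M > 30M)
```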
Congrats on this idea!
Since CEAP is considering advocating for mandatory transparency reporting for international non-governmental organizations (INGOs), how does CEAP plan to address potential pushback from governments or INGOs against its policy recommendations, especially transparency regulations?
I imagine many governments and/or INGOs might push back on reporting things that don't make them look particularly good. For example, if an org is unwilling to do an independent cash benchmarking study, or a government is unwilling to admit that its prior interventions (which it may have spent, or wasted, billions on) underperformed, they could simply reject CEAP's recommendations.
How do you navigate between being a lobbying organization and an advocacy organization?
While it’s understandable to want to take action and implement some form of regulation in the face of rapidly advancing technology, it’s crucial to ensure that these regulations are effective and aligned with our ultimate objectives.
It’s possible that regulations proposed by neo-Luddites could have unintended consequences or even be counterproductive to our goals. For example, they may focus on slowing down AI progress in general, without necessarily addressing specific concerns about AI x-risk. Doing so could drive cutting-edge AI research into the black market or autocratic countries. It’s important to carefully evaluate the motivations and objectives behind different regulatory proposals and ensure that they don’t end up doing more harm than good.
Personally, I’d rather have a world with 200 mostly-positively-aligned research organizations than a world where only autocratic regimes and experienced coding teams—that are willing to disregard the law—can push the frontiers of AI.
Your suggestion of having multiple individuals or groups independently calculate the expected value of an intervention is an interesting one. It could increase objectivity, reduce the influence of motivated reasoning or self-serving biases, and leave us with not only better judgments but also several times more research and consideration.
Do you know of any EA organizations that are considering it or any prior debate about this idea in the forum?
It would be interesting to see if this method would lead to more accurate expected value calculations in practice. Additionally, I am curious about how the process of comparing results and coming to a consensus would be handled in this approach.
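For what it's worth, here's a rough sketch of how the comparison step might work, purely as my own illustration; the estimates, units, and aggregation choices are all hypothetical.

```python
# Hypothetical sketch: aggregating independent expected-value estimates
# for one intervention. Estimates and units are made up for illustration.
from statistics import mean, median, stdev

# Each evaluator's independent estimate, e.g. in QALYs per $10k donated.
estimates = [4.2, 3.1, 5.0, 2.8, 4.5]

consensus = median(estimates)    # robust to a single outlier evaluator
disagreement = stdev(estimates)  # a large spread flags the need for discussion

print(f"median estimate: {consensus:.2f}, mean: {mean(estimates):.2f}")
print(f"spread (std dev): {disagreement:.2f}")
if disagreement / mean(estimates) > 0.3:  # arbitrary threshold for illustration
    print("High disagreement: evaluators should compare assumptions.")
```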
Absolutely, I think I misunderstood your differentiation between 'direct for impact' and 'high impact in general' for-profit companies. Although there is certainly a line, part of my thinking comes from the idea that most new companies were created to solve problems of various sorts. So the bigger the problem, the bigger the opportunity for profit while also making an impact.
I think there could be a good case for having more EAs try to reach decision-making levels of management at non-EA for-profit companies. Such priorities could be especially important at social media companies or companies like Neuralink. Imagine the movement-building benefits of EAs at the top of major social media companies (I suppose Elon Musk might sort of be considered EA-adjacent with Twitter).
I agree with your thesis and want to dive deeper into a few historical examples.
The iPhone was a profitable business idea built by Steve Jobs to make money. While it certainly did that, the iPhone (and smartphones generally) also revolutionized how people communicate, significantly increasing the capability of almost everyone in society. There's a good argument to be made that the proliferation of good smartphones significantly accelerated global poverty-reduction efforts, and likely the EA movement itself.
Another example, which we're seeing come to fruition this decade, is the transition to clean energy and transportation. While perhaps not an x-risk, the earth could never sustain human civilization indefinitely on unsustainable energy sources like fossil fuels, especially since burning those fuels at scale is making life more difficult for everyone over time. Having done a great deal of research into renewables and EVs, it's clear to me that the primary obstacles to solving this problem are energy storage (with batteries being the primary industry) and generation (wind and solar are intermittent, which is why energy storage is required).
I think there's a very strong argument to be made that one company, Tesla, has achieved its goal of accelerating the energy storage and electric vehicle industries by at least a decade, to the massive benefit of humanity. Before they proved the profitability and objective superiority of electric vehicles (2019 to now), almost no major vehicle manufacturers were planning to transition away from gas cars before 2040 or 2050.
By far the most important bottleneck in the entire transition is the rate and cost at which energy storage (batteries) can be produced. By bringing EVs to scale, Tesla has brought down the price of batteries from over $1,000 per kWh to about $100, with plans to reach $60 in a few years.
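To put those figures in perspective, here's the arithmetic at pack level. The 75 kWh pack size is my own illustrative assumption, not a claim about any particular vehicle; only the $/kWh figures come from the paragraph above.

```python
# Worked arithmetic on the battery-cost figures cited above.
# The 75 kWh pack size is an illustrative assumption, not a claim
# about any particular vehicle.
costs_per_kwh = {"early": 1000, "recent": 100, "planned": 60}
pack_kwh = 75

for label, cost in costs_per_kwh.items():
    decline = 1 - cost / costs_per_kwh["early"]
    print(f"{label}: ${cost}/kWh -> ${cost * pack_kwh:,} per pack "
          f"({decline:.0%} below early costs)")
# early:   $1000/kWh -> $75,000 per pack  (0% below early costs)
# recent:  $100/kWh  -> $7,500 per pack   (90% below early costs)
# planned: $60/kWh   -> $4,500 per pack   (94% below early costs)
```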
I followed a lot of the development of both of these companies, but at the time I was working on grassroots policy advocacy (Green New Deal-type work) from 2018 to 2020. After a while, I realized that all of my large-scale impact was completely negligible compared to the massive impact Tesla was making while also making a ton of money for shareholders.
Tesla and Apple are only two examples; many of the major inventions, and the companies built on them, drastically increased quality of life for millions or billions of people and should not be discounted relative to charitable work. With some new companies, I think joining a profitable company that is building infrastructure for the future could be far more impactful than joining a medium-impact charity, though that's difficult to quantify.
The one that comes to my mind is Neuralink, which could prove transformative for the entire human experience within three decades. While it is a for-profit company, it's important that they take care to ensure safety against both technical failures and corruption when proliferating BCIs, as this could go very wrong or very right. In fact, I think 80,000 Hours would be wise to direct as many effective altruists as possible toward Neuralink. It was, after all, created to help humans cognitively catch up with AI so that we can successfully influence it in positive directions and 'go along for the ride'. It's a different approach to reducing AI risk that could also prove transformational for human civilization.
Sorry for the lengthy comment, maybe I should make the Neuralink paragraph its own post. I’d love to know what you all think of Neuralink & working at profitable companies making large (hopefully positive) impacts.
Thanks for helping me edit the post that I just finished!
I wasn’t kidding about having a plan to not just outperform GiveWell Top Charities, but fully fund all of them—as a side project no less...
Introducing The Logical Foundation, A Plan to End Poverty With Guaranteed Income
Please check it out (and upvote so more people see it) and try to find any holes in the plan; that's largely why I put it here.
PS: I really did have such a great time with my EA program this fall; I actually got our 501(c)(3) determination letter in the middle of it. Do you have any thoughts on the 'Longtermist Implications' section? Maybe even share it with your cohort?
Guaranteed income is generally defined as a regular cash payment accessible to members of a community, with no strings attached and no work requirements. There's no minimum payment amount, and a growing "Guaranteed Income Movement" operates under that term. I use GI instead of UBI because UBI implies payments to every person in an entire geographic region.
You're right that it can't fully scale with private funding alone; we'll have to apply for funding and advocate for government grants at all levels, across the country, under the same platform. We can do this without getting into 501(c)(3) trouble with the IRS for politicking.
The thought process is, “if we get enough people guaranteed income, they will be very loud about how awesome it is and the rest of the population will demand national UBI policy.” Then repeat in every country.
Thanks for reading this 30-minute thing. I first wanted to make it a short 5-minute read, but I realized that many of you would probably want all of the evidence laid out clearly and our plan explained in excruciating detail, so you can point out the super obvious reason this has a 0% chance of success that I've completely overlooked, despite my searching for fundamental issues ever since I came up with the idea and asking every expert I can find.
The EA community is probably the most knowledgeable community in the world about helping people. Considering the world-changing impact potential I've outlined, even a 1% chance of success would justify spending millions to make this happen. Unless, of course, someone can find a problem.
Please find a fundamental, first-principles problem with my plan, and, if you can’t, please help us succeed. My challenge for you: Find an insurmountable problem, or help us change the world.
I think our odds are more like 40% without support from EA organizations and as high as 75% with your support. The faster we can grow (but not too fast), the faster we may be able to end poverty. Time is very much of the essence.
Introducing The Logical Foundation, an EA-Aligned Nonprofit with a Plan to End Poverty With Guaranteed Income
Hi all, I'm Michael Simm. I am a nonprofit entrepreneur focused on disruptive systems (e.g., understanding and using emerging technologies to make the future better). I'd love to see how familiar, if at all, people in EA are with disruptive technologies, and how open they might be to learning about a new one that might impact EA greatly.
When I first became interested in disruptive technologies, my focus was on climate change. I quickly identified electric cars, solar panels, and energy storage (particularly batteries) as being on the verge of upending reliance on fossil fuels and global transportation systems. Then I ran across Dr. Tony Seba, who was one of the only people to accurately predict the massive price declines of solar, electric cars, and batteries. He's now doing fantastic research into the coming disruptions in energy, transportation, and other areas with an organization called RethinkX.
RethinkX has predicted, among other things, that almost no gas cars will be sold by 2030, and that the animal agriculture industry is headed into widespread bankruptcy (which would be very good for animal welfare interests). They've found that disruption generally happens when a new system proves 5x better than the incumbent one, opening a huge opportunity space.
The nonprofit I founded is designed to leverage the disruptive potential of the most cost-effective anti-poverty intervention in developed countries (guaranteed income) and use it to make a big impact against homelessness, and then poverty, over time. This may sound far-fetched, but I think our pilot project is likely to outperform many EA developing-world interventions in cost per QALY. I'm working on a major post to introduce it later this week, so please reach out if you'd be interested in contributing.
Ah, thank you, this does add good context. If I were an EA with any background in finance, I'd probably be very upset with myself about not catching on (a lot) earlier. Since he'd been involved in EA for so long, I wonder if he never truly subscribed to EA principles and was simply 'playing the long game'. I've seen plenty of examples of SBF being a master at this dumb game we woke westerners play, where we say all the right shibboleths so everyone likes us.
I had heard of him only a few times before the crash, mostly in the context of YouTube clips where he basically described a Ponzi scheme, then said that it was 'reasonable'. The unfortunate thing is that FTX's exchange business model wasn't inherently fraudulent. There was likely no way for anyone outside the company to know he was lending out users' money against their own terms of service (short of demanding a comprehensive audit).
Ultimately it doesn’t look like he’s going to get away with it, but it’s good to be much more cautious with funders (especially those connected to an operating non-public company) going forward.