EA/Rationalist Safety Nets: Promising, but Arduous
Rigor: Quickly written (~6 hours). Originally made as a Facebook post that emphasized the “Potential Challenges” section. There’s some discussion there.
Epistemic Status: This mostly comes from personal experiences and discussions with community members in the last few years.
Many thanks to Aaron Gertler, Stefan Schubert, Julia Wise, and Evan Gaensbauer for feedback directly on this post. Also, thanks to everyone involved in the Facebook discussion.
Introduction
I’ve been around EA/rationality for several years now (starting in 2008, during college). I’ve seen several instances where promising people (myself included) could have really used some help.
Potential help includes:
Money
Good mental health support
Friends or helpers, for when things are tough
Insurance (broader than health insurance)
I’m based in the United States. Government benefits here are substantially worse than in some European countries, so it’s possible these concerns don’t apply elsewhere.
I think interventions in these areas could be valuable. However, I believe they’re unusually challenging to implement. I encourage future groups tackling this space to plan accordingly. I also hope that people upset with a lack of existing infrastructure can sympathize with the challenges around it.
Also, see this post for related discussion: An Emergency Fund for Effective Altruists.[1]
Evidence of the Problem
Some evidence of what I’m referring to includes:
Howie’s podcast with 80,000 Hours went into detail about hard times he’s had and help he received. I’m thankful that this information was made public; it’s probably the best case study we now have within our community. I think it’s pretty clear that Howie Lempel is doing great work, and his situation might have turned grimmer if things had gone a bit differently, including if he had less support. While I’m happy he had this support, it seemed exceptional compared to other things I’ve seen and would expect.
I have had a few scares. Earlier on in my career, I was very low on money and had health problems that I was worried might put me on disability or worse.
I’ve known several effective altruists and rationalists who have been very low on funding for some possibly-crucial parts of their lives. Some have gone on later to do very well, but it’s easy to imagine it going differently. Unlike with Howie Lempel’s case, they had rough spots before they did valuable work.
In general, the situation for many people in the United States (both within and outside of our communities) seems pretty bad, so the prior is poor. Many people have feeble social support structures. (There’s quite a bit of literature on this topic.)
Related Infrastructure
Our communities already have a few valuable initiatives. Some of these include (just off the top of my head):
The CEA community health team
REACH (no longer running)
Perhaps we can learn from religious communities. A while ago, I chatted with a Mormon effective altruist who explained how their system works. Mormons regularly tithe 10% of their income to the Mormon church, but in return, the church seems to take care of them when they’re down. I’ve recently been watching videos from Peter Santenello about Hasidic Jewish and Amish communities, and they seem to have similar systems.
Potential Challenges
At this point, there are some wealthy people in and around effective altruism. So if there are straightforward spending opportunities that would be competitive through an EA lens, there could be funding for them.
Unfortunately, I think setting up a safety net would result in several nasty complications. These might be particularly grueling if the program were intended to itself be an “effective intervention” instead of a “community pool, funded by and benefiting regular effective altruists.”
These complications include:
It’s tough to discern “I’m giving money to people with high EV” from “I’m giving money to friends and people I want favors from.” So I think anyone who tried to do this would have a complex case to make, and onlookers would assume it was corrupt. Additionally, I believe such a process would be ripe for corruption.
Decisions about who to exclude are some of the least enjoyable decisions. There are tons and tons of people out there with horrible situations. Some people are particularly good at putting together sob stories, and others have critical stories but are too polite to speak up. It’s kind of like the real-life version of Papers, Please.
It’s easy to conflate “a good-hearted person who sort of morally ‘deserves’ money, but is unlikely to produce much social impact” with “some jerk we don’t like, but who we expect to produce more social impact.”
Many people hate being evaluated in this way. People don’t like being rejected for jobs, and this might be more intense, as it would have to be a broader evaluation. (Unlike with employment, you couldn’t claim, “Maybe you’re high-value, but you’re not a fit for this specific role.”) If the application process were to take place when someone’s in need, then that person might already be in an emotionally challenging place.
Social safety nets make it harder to leave a community or pursue other, independent goals. It seems really unhealthy to have a situation where someone feels like they need to signal their belonging in a community or overstate their impact in order to get basic food or psychological services. Similarly bad, people who don’t believe in effective altruism, but need a safety net, might feel pressured into trying to schmooze with the right people and pretend.
It’s hard to tell who should qualify for services. The critical data might be confidential. Applicants may want this to be done personally and emotionally (“just talk to me, and I’ll explain it”), but this seems to me like the least objective or high-quality way to make the decision.
Where they exist, non-governmental community-wide safety nets are typically run by religious organizations. Creating an EA variant might make EA seem weirder.
I think the number one problem here is that we’re just in a harsh world, and there are lots of great people out there going through brutal times. I find much of the situation globally heartbreaking.
But trying to fix it for the “most altruistic people in expectation” is difficult.
Reflecting on this list, I think many of these concerns are common for social workers and similar professions. I imagine they could be overcome with the right efforts.
Possible Research Questions
If you don’t want to set up an organization yet, but you are interested in investigating this topic, here’s a list of quick questions I have at this point.
Questions for individuals who might get into challenging circumstances:
Are economic/health challenges common in EA, and how severe are they?
How severe are the downward spirals that come from health/mental health/poverty, for people in our circles?
Are there any suitable preventative measures people could take? Like, insurance options or clever health measures?
Questions for individuals interested in making better programs:
How good or bad a job are we doing now?
What should the EA strategy be, if any, for one-off payments?
What social infrastructure do we currently have in place for identifying and helping people in need? Can we improve this using simple techniques, like having a Google Group for funders interested in making 1-off payments?
What do other communities do (including religious communities)? What might we be able to learn from them?
Would safety net programs make more sense as community-wide initiatives without high effectiveness requirements, or as effective charitable interventions? Or maybe somewhere in-between?
What sorts of skill sets might we want for setting up these programs?
[1] Note that I wrote this post on Facebook a few weeks before this other post came out. However, I converted the Facebook post to an EA Forum post earlier than I otherwise would have because of that piece (just in case more people were actively considering setting up such a service).
An idea for addressing the challenges is to make the safety net something that only a “genuine EA” would find attractive. For example, you get free room and board in a house with other EAs in a low-prestige + low-rent location, with mandatory EA volunteer hours (perhaps spent helping other inhabitants of the house with their issues?) Only vegan food is served, and the length of your stay is capped at N years. I’m not sure it’s necessary to be 100% resistant to outsiders with sob stories; I’d say the important thing is that outsiders with sob stories should be able to market those stories elsewhere & get more of what they want. Also, even if they fake their way into an EA support group like what I described, they might find they absorb EA values and identify as an EA at the end… lol.
This sounds like an almost exact description of the EA Hotel (CEEALAR), which is mentioned in the post. I think this does a pretty decent job of selecting for ‘genuine EA’ people.
Although I don’t think they have mandatory volunteering?
Having listened to the 80K podcast with Howie Lempel, it seems that it was important for him to spend some time away from a context where EAs were both his colleagues and his friends in order to recover. So I’m not sure for which cases this would actually be a good solution.
Good point. However, since Howie was employed at an EA organization, he might be eligible for the idea described here. One approach is to implement several overlapping ideas, and if there’s an individual for whom none of the ideas work, they could go through the process Ozzie described in the OP (with the associated unfortunate downsides).
Some anecdata that might or might not be helpful:
As I mentioned on FB, I didn’t have a lot of money in 2017, and I was trying to transition jobs (not even to do something directly in EA, just to work in tech so I had more earning and giving potential). I’m really grateful to the EAs who lent me money, including you. If I had instead gone the standard “work a minimum wage job while studying in my off hours” route (or worse, “work a minimum wage job while applying to normal grad jobs, and then work a normal grad job while studying in my off hours”), I think my career trajectory would’ve been delayed by at least a year, probably longer.
Delaying my career trajectory would’ve cost ~$100k in EV if I had just stayed in tech and donated, but I think my current work is significantly more valuable, so I think it would’ve cost more than that.
The main counterpoint I could think of is that minimum wage jobs are good for the soul or something, and I think it’s plausible that if I had worked one long enough, I would be more “in touch” with average Americans and/or more generally mature on specific axes. I currently do not believe the value of this type of maturity is very high compared to my actual counterfactual (at Google, etc.): the skills/career capital gained via having more experience interacting in “elite cultures,” being around ambitious people, and thinking about EA stuff.
Thank you for the overview! What comes to my mind as similar is the Künstlersozialkasse (KSK) in Germany, which is governed by a special law called the Künstlersozialversicherungsgesetz.
This artist social fund is open to anyone who works self-employed in an artistic job (visual artists, authors, journalists, musicians, etc.) and doesn’t have employees. You have to fill out a 9-page form to apply, stating what work you have already done and that you earn over the minimum income from artistic work of 3,900€/year. In the first three years of your working life, you don’t have to prove this minimum, and this requirement was also suspended during Covid times.
If you get accepted, the KSK will pay the cost of your health, care, and pension insurance, which covers, for example, doctors, clinics, medicine, psychotherapy, rehabilitation clinics, and dentists. You have to state your income yearly and pay a portion to the KSK.
The KSK is financed by three sources:
Payments by artists (50%)
Payment by the government (20%)
Payment by clients that employ the artists (30%)
My company has to list all artists’ invoices that we paid in a year (to graphic designers, photographers, make-up artists, etc.) and submit the list to the KSK. We are then charged a percentage (currently 4.2%) of this total. Every company and self-employed person in Germany has to do this.
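To make the mechanics concrete, here is a minimal sketch of that company-side levy calculation (my own illustration, not official KSK tooling; the invoice amounts are made up, and the 4.2% rate is just the current figure cited above):

```python
# Sketch of the company-side KSK levy described above (illustrative only).
KSK_LEVY_RATE = 0.042  # current rate; it changes from year to year

def ksk_levy(artist_invoices_eur: list[float]) -> float:
    """Return a company's annual KSK charge: a flat percentage of all
    invoices it paid to self-employed artists during the year."""
    return sum(artist_invoices_eur) * KSK_LEVY_RATE

# Example: three (made-up) invoices paid to designers/photographers.
print(ksk_levy([2_000.0, 5_500.0, 1_200.0]))  # -> ~365.40 EUR
```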
An analogue in EA could be a system where:
you have to prove that you either
got EA funding for your work,
are working at an EA org without insurance, or
are in the first 3 years of your EA career
you pay a portion of your EA salary for the insurance
the insurance covers health insurance and other insurance-like services
funders fill the remaining gap in payments
This model would still have the issue of vetting applicants, but one clear criterion would be that you can only get in during your first three years without showing minimum funding through grant approvals or EA-aligned jobs. If you don’t earn any EA money after that, you would be excluded.
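To make these criteria explicit, here is a rough sketch of the vetting rule in code (a hypothetical illustration only; the class, field names, and three-year grace period simply restate the list above):

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    has_ea_funding: bool       # has received an EA grant for their work
    at_ea_org_uninsured: bool  # works at an EA org that offers no insurance
    years_in_ea_career: float  # time since starting EA-related work

def is_eligible(a: Applicant) -> bool:
    """Hypothetical eligibility rule for an EA analogue of the KSK:
    proven EA funding, an uninsured EA job, or a three-year grace
    period at the start of an EA career."""
    return (
        a.has_ea_funding
        or a.at_ea_org_uninsured
        or a.years_in_ea_career < 3
    )

# Example: someone two years into an EA career, with no grant or EA job
# yet, would still qualify under the grace period.
print(is_eligible(Applicant(False, False, 2.0)))  # -> True
```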
So thinking about how this works overall:
The government is providing a 20% subsidy
Any artist who receives insurance from their employer subsidises those who don’t
Artists outside of the KSK (either because they’ve not been very successful or they choose not to join) subsidise those inside
Highly profitable artists subsidise the less profitable ones (the egalitarian component likely works better here, as there is more variation in what artists earn; however, EAs are probably happier to cross-subsidise each other)
People hiring artists have to complete additional paperwork
The total compensation for artists is likely slightly higher because people forget about these additional costs when considering what they can pay
I wasn’t as precise as I could have been, and will try to clarify:
The German health, care, and pension insurance system is set up so that employees and employers each pay 50% of the fees. The fee is defined as a percentage of income. High-income earners subsidise low-income workers in this way.
The KSK is a system on top of this, only for self-employed artists, who would typically have to cover the 50% share that an employer would otherwise pay. 50% of the insurance is paid by the artists (the same as what employees would pay), the government subsidises 20%, and clients cover 30%.
Clients have to pay without knowing if the artist is part of the KSK, so there is some additional subsidising.
The additional paperwork for clients could be reduced if artists were allowed to collect the payments themselves, which I would like better.
I’m not in favour of how the KSK system works and wouldn’t recommend it as a model. However, I think their way of identifying an artist by type of work and minimum revenue from this work area is an interesting input.
Thanks for writing this good overview of a perennial topic.
Paying people higher salaries for EA jobs might be an alternative approach to at least part of this problem. It would allow people to save to protect themselves from future unemployment, without the difficult vetting and bad incentive effects of ‘insurance’. It doesn’t help people very early on in their careers, but probably no insurance product would either, as these people would often not have built up a credible history of contribution anyway.
Agreed that higher salaries could help (and are already helping). Another nice benefit is that they can also be useful for the broader community; more senior people will have more money to help out more junior people, here and there.
I imagine if there were an insurance product, it would be subsidized a fair amount. My hope would be that we could have more trust than would exist for a regular insurance agency, but I’m not sure how big a difference this would make.
Another idea is a safety net which estimates the opportunity cost associated with taking a low-paying EA role and caps the financial support at said opportunity cost. Potentially a much cheaper way to achieve the same end result.
The best approach might be to have people register for this safety net as soon as they get an EA role, so they can argue for a particular opportunity cost at that time and know how much “insurance” they’re getting.
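As a minimal sketch (purely hypothetical; the function name and figures are made up for illustration), the cap could be fixed at registration like this:

```python
def capped_support(requested: float,
                   forgone_salary: float,
                   ea_salary: float) -> float:
    """Hypothetical rule: payouts are capped at the opportunity cost
    the person incurred by taking the lower-paying EA role, as agreed
    at registration time."""
    opportunity_cost = max(forgone_salary - ea_salary, 0.0)
    return min(requested, opportunity_cost)

# Example: giving up a $90k role for a $60k EA role implies a $30k cap,
# so a $50k request would be paid out as $30k.
print(capped_support(50_000, 90_000, 60_000))  # -> 30000.0
```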
Perhaps, although it may also increase the number of people working uncompensated.
For people who have taken the Further Pledge, an increase in salary would be less valuable than insurance paid by the employer. This might be a case that is only relevant for a few people; however, they might also be among the most dedicated group.
Hey, I wrote the article you refer to. I only intend to partially reimburse people who donated money to EA-related causes. Most problems you describe apply to a safety net for all effective altruists, which would be much more difficult. I’ll quote a comment of mine:
I believe this covers all points you raised, but let me know if I missed anything. Just to reiterate, my hypothetical charity wouldn’t make a judgment on whether applicants are still effective altruists if they need money.
The top comment on my article:
Do you agree with this?
I liked your post a lot too, and I think it would be a good starting point, precisely because it would be simpler and easier to avoid corruption by creating a safety net with a fixed group of members (people who had submitted evidence of donations) and capped payouts (50% of their total donation amount, or similar), rather than having a charity-style organization that evaluates applications from anyone, like EA Funds.
Mormon/Amish/etc social insurance through church works well because there is a pretty clear, pretty hard-to-fake signal of who’s a community member and who’s not (i.e., do you spend every Sunday in church or not). Normal insurance companies create a clear distinction between members and nonmembers by requiring everyone to pay a monthly premium. The EA and rationalist communities will probably always be more amorphous and fuzzy than a typical Amish group, but if we just required that everyone pay a monthly premium, it’s unclear how we could do any better than existing insurance companies. So I like the idea of deciding membership based on proof of past charitable donations to EA causes.
I also agree that a service like this (allowing people to “get their donations back” from the community pool if they unexpectedly fell on hard times) might encourage people to donate more or take on more risks in the first place, which would be good for EA overall.
I think it would be good to start experimenting with the service described in your post. Over time, if successful, the insurance pool could try to branch out into other more advanced services—perhaps helping people make risky but high-expected-value career moves by offering them some kind of insurance or support in case their ambitious career move fails. Or doing the kind of community-assistance grantmaking that Ozzie is exploring here.
The biggest wins probably come from finding more good ways to support early-career people facing precarious situations while just getting into EA, exactly like Linch’s story above. The “get back your proven past donations” approach won’t work as well for people in those situations since most of them won’t have made many EA donations yet. But hopefully we could try to build up to that over time somehow.
(Just noting general agreement with this)
I agree that your proposal gets around most (maybe all?) of the issues I mentioned. However, your proposal focuses on earning-to-givers who have already given a fair bit, which seems to be tackling a minority of the problem (maybe 20%?). Maybe this is a good place to begin. I feel like I haven’t met many people in this specific camp, but maybe there are more out there.
I’m happy to see it on a small scale. That said, the existing discussion/debate doesn’t seem like very much to me. I also feel like there could be some easy wins for research, like investigating the questions I listed above.
I’d expect 1-8 weeks of investigation would be the best next step. (Note that “investigation” could mean “interviewing a bunch of people to see what they might want”)
Ah, that’s where we went wrong. I assumed you would have mentioned that if you thought so.
I agree, and it is quite challenging to determine the size of that minority. If anyone knows anyone who has been in this situation, please send me a message.
Will do. No one comes to mind now, but if someone does, I’ll let you know.
(Also, others reading this who have ideas should send them to Bob.)
The Nonlinear Fund is working on addressing this problem for people in AI Safety. (My guess would be that they will start with people at orgs, then possibly expand to people on certain grants; I interned there a while ago, so I don’t know the current plan.)
Gavin Li is working on EA Offroad for people “not constituted for college” or who would find attending college challenging due to their financial position.
I would really like to see the establishment of more EA Hubs in cities that are more affordable. I think that the financial challenges a lot of people are facing result from trying to support themselves in some of the most expensive cities in the world. That said, there seem to be a few projects starting in this space, so I would probably encourage people to support existing projects rather than starting more.
I’m not exactly sure of the scope of Magnify Mentoring (previously WANBAM), but it might be able to provide some support in helping people figure out their lives. If not, then perhaps someone should create a mentoring service more focused on helping people improve their lives.
Further ideas:
Bountied Rationality: I’m sure that there are a lot of small, useful, and accessible tasks to do. Perhaps someone should apply for funding in order to post more bounties here. (Argument against: bounties are generally winner-takes-all, so they can easily result in people burning a lot of time without receiving any money in return.)
On a similar, but slightly different note, the AI Safety Fundamentals course is now paying facilitators $1000. Having more of these kinds of opportunities available seems positive.
Programming bootcamps: a lot of EAs are capable of becoming programmers, and this could provide a path to financial stability.
Some kind of peer support project with group facilitators receiving training from professionals.
Something like Y Couchinator to help EAs share their free rooms.
Exit grants. In some circumstances, it might make sense to award exit grants to people who were funded/employed productively for a reasonable period but have now become unproductive. These grants should probably be awarded privately with only the total number of grants and dollar value reported.
Final thoughts:
Given all the excellent points you make about the challenges of such a fund, I believe that it’s important to have a wide variety of other means of support. Nonetheless, I suspect that a more traditional assistance organisation would be valuable, so long as there was proper communication about its role; specifically, about the limits on how much support it could provide, and about the fact that the organisation wouldn’t be able to help everyone.
That all sounds pretty good to me. I like the idea of a wide variety of means of support; both to try out more things (it’s hard to tell what would work in advance), and because it’s probably a better solution long-term.
Kudos for this post. One quibble I have: in the beginning, you write
But later you focus almost exclusively on money. [Rest of the comment was edited out.]
Good point about focusing on money; this post was originally written differently, then I tried making it more broad, but I think it wound up being more disjointed than I would have liked.
First, I’d also be very curious about interventions other than money.
Second, though, I think that “money combined with services” might be the most straightforward strategy for most of the benefits, except for friends.
“Pretty strong services” to help set people up with mental and physical health support could exist, along with insurance setups. I think that setting up new services that are better than existing ones, but much more limited in scope, is possible, but expensive (at least in the opportunity cost of those who would set them up).
Some helpers when things are rough could in theory be hired.
Encouraging more friendships seems pretty great, but very different. I imagine that’s more about encouraging good community structures/networks/events and stuff, but I’m not sure.
I also want to encourage you and others reading this to brainstorm on the topic. I don’t have any private knowledge, and I imagine others here would have much better insight into much of the problem than I do. (I’m on the older side of EAs now, and am less connected to many of the new/younger/growing communities)
I think this is a good point. One way to address this could be at the level of local EA groups: giving organizers the tools and education to identify struggling members and help them better. As a local organizer, I would find additional resources helpful, especially if they were very action-oriented.
Effective and easy intervention: Help EAs new to your city settle in
Many EAs move to Berlin for jobs; many of them (especially non-Germans, but also Germans) don’t find good housing right away, and some find it difficult to make friends (for example, if they only found housing far from the city center). A single 1-1 conversation/chat with some advice and introductions to people who share their interests can really make a difference, and it’s easy to do: just reach out to new people at your local meetups, in your local EA Facebook group, etc., and offer them help in a friendly, respectful, and non-obtrusive way (making sure you don’t come across as weird, creepy, etc.). Ideally, coordinate with your local group organiser on how best to do that (and if you don’t have a group, contact CEA and set one up! :))
(Very small point) From my understanding REACH is no longer operational
That’s a shame to hear. Is there a write-up anywhere?
Yep. Sorry, I didn’t mean to make it seem like it was. Changed.
Hi Ozzie,
For distributing aid, especially money, do you have any thoughts on allocation/fairness/gatekeeping?
This can be either in a personal sense, or more technical “mechanism design” sense.
This seems to be the main blocker to doing something scaled up and systematic.
It seems that personal networks and relationships work, but scaling this up beyond personal relationships leads to questions about abuse and moral hazard. People who claim to be EA to get money, for example.
My guess is that a serious question besides abuse is fairness. Who deserves it and how much?
Bob’s project, which is mentioned in the comments, is one solution. However, his implementation timeline is unclear, and even if perfectly executed, it only helps a small set of earning-to-givers.
To be clear, I would personally be willing to bite the bullet (to be fair, not with my own money) on some pretty aggressive schemes, but I think buy-in and optics play a role.
I think this is a serious question.
One big question is whether this would be viewed more as a “community membership” thing or as a “directly impactful” intervention. I could imagine the two being pretty different from one another.
I think personally I’m more excited by the second, because it seems more scalable.
The way I would view the “utilitarian intervention” version would be pretty intense, and quite unlike almost all social programs, but it would be effective.
1. “Fairness” is a tricky word. The main thing that matters is who’s expected to produce value.
2. Many of the most valuable people are not EAs. Identifying these people and giving them support would be included. It could look like trying to find the highest-expected-value people globally, even if they have minimal online presences.
3. There would be pretty strict/disciplined measures for evaluating which individuals would represent a “good deal”. This would mean people would have rankings, maybe “predictions of impact”.
4. Maybe there would be “insurance” options, for people to have the feeling of stability (assuming this makes them more productive and risk-taking), even if help later on would in isolation be a net loss. (For example, funding after retirement)
I guess in some ways, this would be a very elite social program, for a very specific definition of “elite”.
Back to the “community membership” variant; one great thing about this is that maybe it could be mostly community-funded, and not in need of external funding. I imagine people in this camp would need to pay a lot of attention to find possible bad actors early and out them. It seems like a tough problem, but the solution space is large.
Another factor is that if people are willing to give up some privacy, then a lot of evaluation becomes easier, and gaming/abusing the system becomes harder.
Random comment: Do you or anyone else have any comments about the use of terminology with negative connotations, like “gatekeeping” or “elite”?
Background (unnecessary to read):
Basically I’ve been using the word “gatekeeping” a fair bit.
This word seems to be an accurate description of principled, prosocial activity to create functional teams or institutions. It includes activities that no one is surprised to see controlled, such as grantmaking.
To see this another way: someone somewhere (Party A) has given funding to achieve maximum impact for something (Party B), and we need people (Party C) to make this happen in some way. We owe Parties A and B a lot, and that usually includes some sort of selection/control over Party C.
Also, I think that “gatekeeping” seems particularly important in the early stages of founding a cause area or set of initiatives, where such activity seems necessary or has to occur by definition. In these situations, it seems less vulnerable to real or perceived abuse, or at least insularity; at the same time, it seems useful and virtuous to signpost and explain what the gatekeeping is and what its parameters and intentions are.
However, gatekeeping is basically a slur in common use.
Now, “elite” has the same problem (“elitism”). It is also an important, genuine and technical thing to consider and signpost, but it can also be associated with real or perceived misuse.
Maybe it’s tenable if I just use “gatekeeping”. I’m worried that if I start passing around docs, posts, or comments filled with mentions of both “gatekeeping” and “elites”, plus terms of art from who knows what else (from various disciplines, not just EA), it might offend or at least look insensitive.
I guess I could replace these words with others.
However, I dislike it when people change words for political reasons. It seems like bad practice for a number of reasons, for example imposing cognitive/jargon costs on everyone.
I’m not sure if you have any thoughts. I thought I would write this because this seems like one of those things that needs input from others.
I definitely think it’s important to pay attention to language when a simple substitution can avoid issues. Maybe it’d be better to use the word “evaluation” or “stewardship” rather than “gatekeeping”?
“High-impact” might also be a good substitute for “elite”.
I would suggest using contentious words when substitutes would significantly impede communication or obscure the point being made, but otherwise being flexible.
Hi Ozzie,
This seems excellent and I learned a lot from this comment and your post.
I agree with the impactfulness argument you have made and its potential. It seems important in that it could operate at a much larger scale. It might even ease other types of giving into the community somehow (because you might develop a competent, strong institution). It’s also impactful by design.
Also, as you suggest, finding very valuable, non-EA people to execute causes seems like a pure win [1].
Now, it seems I have a grant from a major funder of EA longtermism projects. Related to this, I am researching (or really just talking about) a financial aid project similar to what you described.
This isn’t approved or even asked for by the grantmaker, but there seems to be some possibility it will happen (though not more than a 50% chance).
Your thoughts would be valuable and I might contact you.
I might copy and paste some content from the document into the above comment to get feedback and ideas.
[1] But finding and funding such people also seems difficult. My guess is that people who do this well (e.g., Peter Thiel with the Thiel Fellowship) are established in related activities, or connected, to an extraordinary degree. My guess is that this activity of finding and choosing people is structurally similar to grantmaking, as practiced by organizations like GiveWell. I think that successive grantmakers for alternate causes in EA have had a mixed track record compared to the originals. Maybe this is because the inputs are deceptively hard and somewhat illegible from the outside.
I’d be very keen to hear what you’re planning/provide feedback.
I have an objection to the idea (or at least some versions of it) that isn’t covered in the post. I don’t think my objection applies to advice/counseling/that sort of thing, but certainly does to money:
The process of building my own safety net, in itself, made me a lot more effective than I would have been otherwise. I went through some really rough times in my early career, including sleeping on a park bench for a month and seriously contemplating suicide after several rejections for jobs I was highly qualified for. To fix my situation, I had to confront some hard truths about how the world works, and about the limits of my ability to plan the future accurately, that I am not sure I ever would have confronted otherwise. I wound up in such extreme circumstances because I made risky career bets on independent projects. I assumed they would work out, and that those successes would lead to other successes that would solve all the problems created by going several months without a paying job or meaningful savings. I failed to make contingency plans either for the project failing or for its success not immediately landing me a paying job consistent with the skillset it demonstrated. I also had a bad tendency to blame all my failures on difficult circumstances or other people instead of thinking hard about what I could have done differently.
I have relatives in their 60s who still seem to think a deus ex machina will swoop in and rescue them from imprudent decisions as long as that’s what feels “fair” in that situation. I had definitely absorbed some of that attitude until confronted with the harsh reality that there was no one to rescue me except myself, and that I could do so only by making decisions with a clear eye toward their practical consequences and not based on my feelings about how the world “should” work. So I worry that in being the deus ex machina, even for extremely high EV people, you would risk reducing their EV by depriving them of an important skill-building opportunity.