My understanding is that it’s currently focused on nonprofits (in large part because it’s much more logistically and legally complicated to send money to individuals)
Believing that my time is really valuable can lead to me making more wasteful decisions. Decisions like: “It is totally fine for me to buy all these expensive ergonomic keyboards simultaneously on Amazon and try them out, then throw away whichever ones do not work for me.” Or “I will buy this expensive exercise equipment on a whim to test out. Even if I only use it once and end up trashing it a year later, it does not matter.”
...
The thinking in the examples above worries me. People are bad at reasoning about when to make exceptions to rules like “try to behave in non-wasteful ways”, especially when the exception is personally beneficial. And I think each exception can weaken your broader narrative about what you value and who you are.
I was brought up in a family that was very pro-don’t-waste, and I’ve had a lengthy shift towards “actually, ‘not wasting’ just isn’t very important. It’s more of a carry-over from a time when a) humanity had a lot less ability to produce stuff, and b) humanity had worse landfill technology than we have now.”
Insofar as we do produce too much waste, it’s mostly at a corporate/organizational level, rather than something that makes sense for individuals to prioritize.
It’s not that I think people should be making exceptions to rules like ‘try to behave in non-wasteful ways’, it’s that I mostly now think that ‘don’t be wasteful’ wasn’t that useful a core-rule in the first place.
(Among my cruxes here are a belief that landfill technology has improved since the era when ‘don’t waste’ and ‘recycle’ memes took off, as well as a shift towards ‘thinking broadly about having a high impact is much more important than individual local decisions.’
Past me (and perhaps you) might be suspicious of the claim that ‘landfill technology is actually good enough that this isn’t that big a deal’, perhaps rightly so, because it’s a kinda suspiciously convenient belief. I don’t have arguments-at-the-ready that’d have convinced past me, so I’m mostly just laying out my current reasoning without expecting it to be that persuasive at the moment.)
Just wanted to say I super appreciated this writeup.
I suspect the goal here is less to deconfuse current EAs and more to make it easier to explain things to newcomers who don’t have any context.
(It also seems like good practice to me for people in leadership positions to keep people up to date about how they’re conceptualizing their thinking)
Quick note that if you set All Posts to “sort by new” instead of “sort by Daily” there’ll be 50 posts. (The Daily view is a bit weird because it varies a lot depending on forum traffic that week)
I don’t have much to contribute but I appreciated this writeup – I like it when EAs explore cause areas like this.
For the record I’m someone who works on the forum and thought the OP was expressed pretty reasonably.
Strong upvoted mostly to make it easier to find this comment.
The Middle of the Middle of the funnel is specifically people who I expect to not yet be very good at volunteering, in part because they’re either young and lacking some core “figure out how to be helpful and actually help” skills, or they’re older and busier with day jobs that take a lot of the same cognitive bandwidth that EA volunteering would require.
I think the *End* of the Middle of the funnel is more of where “volunteer at EA orgs” makes sense. And people in the Middle of the Middle who think they have the “figure out how to be helpful and help” property should do so if they’re self-motivated to. (If they’re not self motivated they’re probably not a good volunteer)
My claim is just that “volunteer at an org” is not a scalable action that it makes sense to be a default thing EA groups do in their spare time. This isn’t to say volunteers aren’t valuable, or that many EAs shouldn’t explore that as an option, or that better coordination tools to improve the situation shouldn’t be built.
But I am a bit more pessimistic about it – the last time I checked, in many of the cases where someone said “huh, it looks like there should be all this free labor available from passionate people – can’t we connect these people with orgs that need volunteers?” and tried to build some kind of tool to help with that, it turned out that most people aren’t actually very good at volunteering, and that getting anything done requires something more domain-specific and effortful.
My impression is that getting volunteers is about as hard as hiring a regular employee (much cheaper in money, but not in time and management attention), and that hiring employees is generally pretty hard.
(Again, not arguing that ALLFED shouldn’t look for volunteers or that EAs shouldn’t volunteer at ALLFED, esp. if my experience doesn’t match yours. I’d encourage anyone reading this who’s looking for projects to give ALLFED volunteering a look.)
Membranes
A membrane is a semi-permeable barrier that things can enter and leave, but it’s a bit hard to get in and a bit hard to get out. This allows them to store negentropy, which lets them do more interesting things than their surroundings.
An EA group that anyone can join and leave at a whim is going to have relatively low standards. This is fine for recruiting new people. But right now I think the most urgent EA needs have more to do with getting people from the middle-of-the-funnel to the end, rather than from the beginning-of-the-funnel to the middle. And I think helping the middle requires a higher expectation of effort and knowledge.
(I think a reasonably good mixed strategy is to have public events maybe once every month or two, and then additional events that require some kind of effort on the part of members)
What happens inside the membrane?
First, you meet some basic standards for intelligence, good communication, etc. The basics you need in order to accomplish anything on purpose.
As noted elsewhere, I think EA needs to cultivate the skill of thinking (as well as gaining agency). There are a few ways to go about this, but all of them require some amount of “willing to put in extra effort and work.” Having a space where people have the expectation that everyone there is interested in putting that effort is helpful for motivation and persistence.
In time, you can develop conversation norms that foster better-than-average thinking and communication (e.g. making sure that admitting you were wrong is rewarded rather than punished).
Membranes can work via two mechanisms:
Be more careful about who you let in, in the first place
Be willing to invest effort in giving feedback, or to expel people from the group.
The first option is easier. Giving feedback and expelling people is quite costly, and painful both for the person being expelled (who may have friends and roots in the group) and for the person doing the expelling (who may face a stressful fight with people second-guessing them).
If you’re much more careful about who you let in, an ounce of prevention can be more valuable than a pound of cure.
On the other hand, if you put up lots of barriers, you may find your community stagnating. There may also be false positives – people who “seemed not super promising” but who would have turned out fine if you’d given them a chance to grow.
Notes from a “mini talk” I gave to a couple people at EA Global.
Local EA groups (and orgs, for that matter) need leadership, and membranes.
Membranes let you control who is part of a community, so you can cultivate a particular culture within that community. They can involve barriers to entry, or actively removing people or behaviors that harm the culture.
Leadership is necessary to give that community structure. A good leader can make a community valuable enough that it’s worth people’s effort to overcome the barriers to entry, and/or to maintain those barriers.
Part of the problem is there are not that many volunteer spots – even if this worked, it wouldn’t scale. There are communities and movements that are designed such that there’s lots of volunteer work to be done, such that you can provide 1000 volunteer jobs. But I don’t think EA is one of them.
I’ve heard a few people from orgs express frustration that people come to them wanting to volunteer, but that this feels less like the org receiving a benefit, and more like the org creating a training program (at cost to itself) to provide a benefit to the volunteers.
Updated the thread to just serve as my shortform feed, since I got some value out of the ability to jot down early stage ideas.
I’m not yet sure that I’ll be doing this for more than 3 months, so I think it makes more sense to focus on generating value in that time.
I think the actions that EA actually needs to be involved with doing also require figuring things out and building a deep model of the world.
Meanwhile… “sufficiently advanced thinking looks like doing”, or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.
I will note that the default mode where rationalists or EAs sit around talking and not doing is a problem, but often that mode, in my opinion, doesn’t actually rise to the level of “thinking for real.” Thinking for real is real work.
So I actually draw an important distinction among “mid-level EAs”, where there are three stages:
“The beginning of the Middle” – once you’ve read all the basics of EA, the thing you should do is… read more things about EA. There’s a lot to read. Stand on the shoulders of giants.
“The Middle of the Middle” – ????
“The End of the Middle” – Figure out what to do, and start doing it (where “it” is probably some kind of ambitious project).
An important facet of the Middle of the Middle is that people don’t yet have the agency or context needed to figure out what’s actually worth doing, and a lot of the obvious choices are wrong.
(In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to realize why the coordination failures are happening, nor the skills to do a good job of fixing them. A common failure mode is trying to solve coordination problems when their current skillset would probably produce a net-negative outcome.)
So yes, eventually, mid-level EAs should just figure out what to do and do it, but at EA’s current scale, there are 100s (maybe 1000s) of people who don’t yet have the right meta skills to do that.
What goals, though?
I didn’t write a top level post but I sketched out some of the relevant background ideas here. (I’m not sure if they answer your particular concerns, but you can ask more specific questions there if you have them)
This was quite an interesting point I hadn’t considered before. Looking forward to reading more.