As a self-funded non-student looking for things to try, is this board also for me?
This topic seems even more relevant today compared to 2019 when I wrote it. At EAG London I saw an explosion of initiatives and there is even more money that isn’t being spent. I’ve also seen an increase in attention that EA is giving to this problem, both from the leadership and on the forum.
Increase fidelity for better delegation
In 2021 I still like to frame this as a principal-agent problem.
First of all there’s the risk of Goodharting. One prominent grantmaker recounted to me that, back when a certain well-known org was giving out grants, people would simply frame whatever they were already doing as EA, and then carry on doing it anyway.
This is not actually an unsolved problem if you look elsewhere in the world. Just look at your average company. Employees surely like to sugarcoat their work a bit, but we don’t often see a total departure from what their boss wants from them. Why not?

Well, I recently applied for funding to the EA Meta Fund. The project was a bit wacky, so we gave it a 20% chance of being approved. The rejection e-mail contained a whopping ~0.3 bits of information: “No”. It’s like that popular meme where a guy asks his girlfriend what she wants to eat, makes a lot of guesses, and she just keeps saying “no” without giving him any hints. So how are we going to find out what grantmakers want from us, if not by the official route? Perhaps this is why it seems so common for people close to a grantmaker to get funded: they do get to have high-fidelity communication.

If this reads as cynicism, I’m sorry. For all I know, they’ve got perfect reasons for keeping me guessing. Perhaps they want me to generate a good model by myself, as a proof of competence? There’s always a high-trust interpretation, and despite everything I insist on mistake theory.

The subscription model

My current boss talks to me for about an hour, about once a month. This is where I tell him how my work is going. If I’m off the rails somehow, this is where he would tell me. If my work were to become a bad investment for him, this is where he would fire me.

I had a similar experience back when I was doing RAISE. Near the end, there was one person from Berkeley who was funding us. About once a month, for about an hour, we would talk about whether it was a good idea to continue the funding. When he updated away from my project being a good investment, he discontinued it. This finally gave me the high-fidelity information I needed to decide to quit. If not for him, who knows how much longer I would have continued.

So if I were to attempt a practical solution: train more grantmakers.
Allow grantmakers to make exploratory grants unilaterally to speed things up. Fund applicants according to a subscription model. Be especially liberal with the first grant, but only fund them for a small period. Talk to them after every period. Discontinue funds as soon as you stop believing in their project. Give them a cooldown period between projects so they don’t leech off of you.
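The “~0.3 bits” figure above is just the Shannon information content of the likelier outcome. A minimal check, using the 20% approval estimate from the text (the function name is my own, not from any post):

```python
import math

def information_bits(p_event: float) -> float:
    """Shannon information content, in bits, of observing an event
    that had prior probability p_event."""
    return -math.log2(p_event)

# We estimated a 20% chance of approval, so a bare "No" was the
# 80%-likely outcome and carries little surprise.
bits_in_no = information_bits(0.8)
print(f"{bits_in_no:.2f} bits")  # prints "0.32 bits"
```

A "Yes" would have carried -log2(0.2) ≈ 2.3 bits; the point is that a one-word rejection of the expected kind tells the applicant almost nothing.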
I have added a note to my RAISE post-mortem, which I’m cross-posting here:

Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which looks set to succeed. It is enlightening to compare that project with RAISE. Why is it succeeding where this one did not? I’m quite surprised to find that the answer isn’t so much about more funding, more senior people to execute it, more time, etc. They’re simply using existing materials instead of creating their own. This makes the thing orders of magnitude easier to produce: you can just focus on the delivery. Why didn’t I, or anyone around me, think of this? I’m honestly perplexed. It’s worth thinking about.
You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy’s Law states, “no matter who you are, most of the smartest people work for someone else.”
But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I did personally have an insight into a new giving opportunity, I would not proceed to donate; I would proceed to write up my thoughts on the EA Forum and get feedback. Since there’s an existing popular venue for crowdsourcing ideas, I’m even less willing to believe that large EA foundations have simply missed a good opportunity.
I would like to respond specifically to this reasoning.
Consider the scenario that a random (i.e. probably not EA-affiliated) genius comes up with an idea that is, as a matter of fact, high value.
Simplifying a lot, there are two possibilities here: (X) their idea falls within the window of what the EA community regards as effective, or (Y) it does not.
Probabilities for X and Y could be hotly debated, but I’m comfortable stating that the probability of X is less than 0.5; i.e., we may have a high success rate within our scope of expertise, but the share of good ideas that EA can recognize as good is not that high.
The ideas that reach OpenPhil via the EA community might be good, but not all good ideas make it through the EA community.
To me, reducing your weirdness is equivalent to defection in a prisoner’s dilemma, where the least weird person gets the most reward but the total reward shrinks as the total weirdness shrinks.

Of course you can’t just go all-out on weirdness, because the cost you’d incur would be too great. My recommendation is to be slightly more weird than average. Or: be as weird as you perceive you can afford, but not weirder. If everyone did that, we would gradually expand the range of acceptable things outward.
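The dilemma structure here can be sketched as a toy public-goods game: everyone shares the benefit of total weirdness, but each person alone bears the cost of their own. All payoff numbers below are illustrative assumptions, not anything from the post:

```python
def payoff(own_w, others_w, benefit=3.0, cost=1.0):
    """Payoff to one person in an n-player public-goods framing of
    'weirdness': the benefit of total weirdness is split across all
    n players, but the cost of your own weirdness is yours alone.
    benefit and cost are made-up parameters for illustration."""
    n = len(others_w) + 1
    total_w = own_w + sum(others_w)
    return benefit * total_w / n - cost * own_w

# With 9 others at weirdness 0.5, being less weird than average
# pays more individually...
low  = payoff(0.0, [0.5] * 9)   # 3.0 * 4.5 / 10           = 1.35
high = payoff(1.0, [0.5] * 9)   # 3.0 * 5.5 / 10 - 1.0     = 0.65
# ...even though everyone being maximally weird beats everyone
# being maximally normal:
all_weird  = payoff(1.0, [1.0] * 9)   # 3.0 - 1.0 = 2.0
all_normal = payoff(0.0, [0.0] * 9)   # 0.0
```

Because each person captures only 1/n of the benefit of their own weirdness but pays its full cost, low weirdness dominates individually while shrinking the total; that is the defection dynamic the paragraph describes.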
Because if there is excess funding and fewer applicants, I’d assume such applicants would also get funding.
I have seen examples of this at EA Funds, but it’s not clear to me whether this is being broadly deployed.
Let’s interpret “study” as broadly as we can: is there not anything that someone can do on their own initiative, and do better if they have time, that increases their leadership capacity?
I think the biggest constraint for having more people working on EA projects is management and leadership capacity. But those aren’t things you can (solely) self-study; you need to practice management and leadership in order to get good at them.
What about those people who already have management and leadership skills, but lack things like:
Connections with important actors
Awareness of the incentives and the models of the important actors
Awareness of important bottlenecks in the movement
Background knowledge as a source of legitimacy
Skin in the game / a track record as a source of legitimacy
If I take my best self as a model for leadership (which feels like a status grab, but I hope you’ll excuse me; it’s the best data I have), then good leadership requires a lot of affinity, domain knowledge, vision, and previous interactions with the thing being led. Can this not be cultivated?
There is also significant loss caused by moving to a different town, i.e. loss of important connections with friends and family at home, but we’re tempted not to count those.
I would train more grantmakers. Not because they’re necessarily overburdened, but because, if they had more resources per applicant, they could double as mentors. I suspect there is a significant set of funding applicants who don’t meet the bar but would if they received regular high-quality feedback from a grantmaker (like myself in 2019).
I’d recommend putting the Airtable at the top of your post to make it the Schelling point.
What would it have taken to do something about this crisis in the first place? Back in 2008, central bankers were under the assumption that the theory of central banking was completely worked out. Academics were mostly talking about details (basically tweaking the Taylor rule). The theory of central banking is already centuries old. What would it have taken for a random individual to overturn that establishment, including the culture and all the institutional interests of banks? Are we sure that no one was trying to do exactly that anyway?

It seems to me that it would have taken a major crisis to change anything, and that’s exactly what happened. And now there are all kinds of regulations being implemented for posting collateral around swaps and the like. It seems that regulators are fixing the issues as they come up (making the system antifragile), and I don’t see how a marginal young naive EA would have had the domain knowledge to make a meaningful difference here.

And that goes for most fields. Unless we basically invent the field (like AI safety) or the strategy (like comparing charities), if the field is sufficiently saturated with smart and motivated people, I don’t think EAs have enough domain knowledge to do anything. In most cases it takes decades of work to get anywhere.
I think your title could be a bit more informative.

Holden’s writing seems to follow a hype cycle on the idea of transparency: first you apply a fresh new idea too radically, then you run into its drawbacks, then you regress to a healthy, moderate application of it.

As someone who has felt some of the drawbacks of being outside this “inner ring”, I wouldn’t complain about the transparency per se. Lack of engagement, maybe, but that turned out to be me. I’m still waiting for concrete suggestions. I also think your project would be more fruitful if you interviewed these people in person and published the result.
Would removing the “crap” have been sufficient to make it polite? I like to be direct.
I can’t look inside your head, but if the mere thought of something makes you suffer, it probably means it reminds you of something that you are trying to ignore, i.e. trauma.
Assuming that this is indeed the case, I would further speculate that you are ignoring this memory or unpalatable insight because you subconsciously expect that thinking about it would disturb you to the point of getting in the way of whatever you would prefer to be doing: your daily pursuits, whatever they are.
The solution then, given these assumptions, would be to set aside some time (a week or two) to sit on a pillow and have nothing to do. This tends to bring unresolved trauma to the forefront by itself, simply because there is finally space for it.
Unfortunately you always find that there is more stuff to deal with, so this kind of spiritual work is a lifelong process (of getting progressively happier). I wholeheartedly recommend it.
(Lots of downvotes, so where are all the comments?)

I want to reward you for bringing up the topic of power dynamics in EA. Those exist, as in any community, but in EA especially there seems to be a strong current of denying that EAs are constrained by their selfish incentives like everyone else. It requires heroism to go against that current.

But by just insinuating, and not delivering any concrete evidence or constructive suggestions for change, you haven’t really done your homework. I advise you to withdraw this post, cut out half the narrative crap, add some evidence and a model, make a recommendation, and then repost it.
What does “cancelling” mean, concretely? I don’t imagine the websites will be closed down. What will we lose?
I’ve been trying to figure out why cancel culture is so powerful. If only ~7% of people identify as pro-social-justice, why are social media platforms so freely bending to their will? Surely it’s not out of the goodness of their hearts; what is the commercial motive? I don’t buy the idea that it is simply a marketing stunt. As far as I can tell, a pro-SJ stance does not make a company look much more favorable at this point.
But then I found this:
For context, Facebook is the social media company that has been most reluctant to be political, and apparently this is really making them bleed financially.
Why are marketing people so willing to go out of their way to do “the right thing” instead of the profitable thing? Is this something cultural? Some more digging showed that the NAACP and the ADL are leading this charge of boycotting Facebook, but I don’t know what to make of that.