Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I’m sure that works for some people, but it feels really flat to me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems, but leave you without the tools to meaningfully follow up.
Faced with big dreams but unclear ability to enact them, people have a few options.
try anyway and fail badly, probably too badly for it to even be an educational failure.
fake it, probably without knowing they’re doing so.
fall into learned helplessness, possibly systemic depression.
head towards failure until too many people are counting on them and someone steps in and rescues them. The rescuer considers this net negative and prefers the world where the project was never started to the one where they had to rescue it.
discover more skills than they knew they had, feel great, accomplish great things, and learn a lot.
The first three are all very costly, especially if you repeat the cycle a few times.
My preferred version is ambition snowball or “get ambitious slowly”. Pick something big enough to feel challenging but not much more, accomplish it, and then use the skills and confidence you learn to tackle a marginally bigger challenge. This takes longer than immediately going for the brass ring and succeeding on the first try, but I claim it is ultimately faster and has higher EV than repeated failures.
I claim EA’s emphasis on doing The Most Important Thing pushed people into premature ambition and everyone is poorer for it. Certainly I would have been better off hearing this 10 years ago.
What size of challenge is the right size? I’ve thought about this a lot and don’t have a great answer. You can see how things feel in your gut, or compare to past projects. My few rules:
stick to problems where failure will at least be informative. If you can’t track reality well enough to know why a failure happened, you definitely need an easier project.
if your talk gives people a lot of ambition to save the world/build billion-dollar companies but their mind goes blank when they contemplate starting a freelancing business, the ambition is fake.
Hmm, I personally think “discover more skills than they knew. feel great, accomplish great things, learn a lot” applies a fair amount to my past experiences, and I think aiming too low was one of the biggest issues in my past, and I think EA culture is also messing up by discouraging aiming high, or something.
I think the main thing to avoid is something like “blind ambition”, where your plan involves multiple miracles and the details are all unclear. This seems also a fairly frequent phenomenon.
I think that you in particular might be quite non-representative of EAs in general, in terms of “success” in the EA context. If I imagine a distribution of “EA success,” you are probably very far to the right.
Accepting your self-report as a given, I have a bunch of questions.
I want to say that I’m not against ambition. From my perspective I’m encouraging more ambition, by focusing on things that might actually happen instead of daydreams.
Does the failure mode I’m describing (people spinning their wheels on fake ambition) make sense to you? Have you seen it?
I’m really surprised to hear you describe EA as discouraging aiming high. Everything I see encourages aiming high, and I see a bunch of side effects of aiming too high littered around me. Can you give some examples of what you’re worried about?
What do you think would have encouraged more of the right kind of ambition for you? Did it need to be “you can solve global warming?”, or would “could you aim 10x higher?” be enough?
I’m a bit confused about this because “getting ambitious slowly” seems like one of those things where you might not be able to successfully fool yourself: once you can conceive that your true goal is to cure cancer, you are already “ambitious”; unless you’re really good at fooling yourself, you will immediately view smaller goals as instrumental to the big one. It doesn’t work to say I’m going to get ambitious slowly.
What does work is focusing on achievable goals though! Like, I can say I want to cure cancer but then decide to focus on understanding metabolic pathways of the cell, or whatever. I think if you are saying that you need to focus on smaller stuff, then I am 100% in agreement.
Does what I said here and here answer this? The goal isn’t “put the brakes on internally motivated ambition”, it’s “if you want to get unambitious people to do bigger projects, you will achieve your goal faster if you start them with a snowball rather than try to skip them straight to Very Big Plans”.
I separately think we should be clearer on the distinction between goals (things you are actively working on, have a plan with concrete next steps and feedback loops, and could learn from failure) and dreams (things you vaguely aspire to and maybe are working in the vicinity of, but have no concrete plans for). Dreams are good, but the proper handling of them is pretty different from that of goals.
I also liked this quote from Obama on a similar theme. The advice is pretty common for very good reasons, but hearing it from a former POTUS had more emotional impact on me: ”how do we sustain our own sense of hope, drive, vision, and motivation? And how do we dream big? For me, at least, it was not a straight line. It wasn’t a steady progression. It was an evolution that took place over time as I tried to align what I believed most deeply with what I saw around me and with my own actions.
(...)
The first stage is just figuring out what you really believe. What’s really important to you, not what you pretend is important to you. And what are you willing to risk or sacrifice for it? The next phase is then you test that against the world, and the world kicks you in the teeth. It says, “You may think that this is important, but we’ve got other ideas. And who are you? You can’t change anything.”
Then you go through a phase of trying to develop skills, courage, and resilience. You try to fit your actions to the scale of whatever influence you have. I came to Chicago and I’m working on the South Side, trying to get a park cleaned up or trying to get a school improved. Sometimes I’m succeeding, a lot of times I’m failing. But over time, you start getting a little bit of confidence with some small victories. That then gives you the power to analyze and say, “Here’s what worked, here’s what didn’t. Here’s what I need more of in order to achieve the vision or the goals that I have.” Now, let me try to take it to the next level, which means then some more failure and some more frustration because you’re trying to expand the orbit of your impact.
I think it’s that iterative process. It’s not that you come up with a grand theory of “here’s how I’m going to change the world” and then suddenly it all just goes according to clockwork. At least not for me. For me, it was much more about trying to be the person I wanted to believe I was. And at each phase, challenging myself and testing myself against the world to see if, in fact, I could have an impact and make a difference. Over time, you’ll surprise yourself, and it turns out that you can.”
The problem with this advice is that many people in EA don’t think we have enough time to slowly build up. If you think AI might take control of the future within the next 15 years, you don’t have much time to build skills in the first half of your career and exercise power after you have 30 years of experience. There is an extreme sense of urgency, and I am not sure what the right response is.
“we don’t have time” is only an argument for big gambles if they work. If ambition snowballs work better, then a lack of time is all the more reason not to waste time with vanity projects whose failures won’t even be educational.
I could steel man this as something of a lottery, where n% of people with way-too-big goals succeed and those successes are more valuable than the combined cost of the failures. I don’t think we’re in that world, because I think goals in the category I describe aren’t actually goals, they’re dreams, and by and large can’t succeed.
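To spell out that lottery framing (the symbols are mine, not anything from the original comments): let n be the fraction of way-too-big attempts that succeed, V the value of a success, and C the average cost of a failure. The lottery is only worth running if

$$ n \cdot V > (1 - n) \cdot C $$

and my claim above is that for dreams mislabeled as goals, n is close to zero, so the left side stays small.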
You could argue that’s defining myself into correctness, and that some big goals are genuinely goals even if they pattern-match my criteria like “failure is uninformative” and “contemplating a smaller project is scary or their mind glances off the option (as opposed to rejecting it for being too small)”. I think that’s very unlikely to be true for my exact criteria, but agree that in general overly broad definitions of fake ambition could do a lot of damage. I think creating a better definition people can use to evaluate their own goals/dreams is useful for that exact reason.
I also think that even if there are a few winning tickets in that lottery- people pushed into way-too-big projects that succeed- there aren’t enough of them to make a complete problem-solving ecosystem. The winning tickets still need staff officers to do the work they don’t have time for, or that requires skills inimical to swinging for the fences.
I should note that my target audience here is primarily “people attempting to engender ambition in others”, followed by “the people who are subject to those attempts”. I think engendering fake ambition is actively harmful, and the counterfactual isn’t “30 years in a suit”, it’s engendering ambition snowballs that lead to more real projects. I don’t think discouraging people who are naturally driven to do much-too-big projects is helpful.
I’d also speculate that if you tell a natural fence-swinger to start an ambition snowball, they end up at mind-bogglingly ambitious quickly, not necessarily slower than if you’d pushed them directly to dream big. Advice like “Do something that’s scary but at least 80% tractable” scales pretty well across natural ambition levels.
I think that people should break down their goals, no matter how easy they seem, into easier and smaller steps, especially if they feel lazy. Laziness appears when we feel like we need to do tasks that seem unnecessary, even when we know that they’re necessary. One reason tasks appear unnecessary is how difficult they are to achieve. Why exercise for 30 minutes per day if things are “fine” without that? As such, one way to deal with this is to take whatever goal you have and break it down into a lot of easy steps. As an example, imagine that you want to write the theoretical part of your thesis. You could start by writing down the topic, the questions you might want to research, and the key uncertainties you have about those questions; then you search for papers to clarify those uncertainties, and so on, immediate step by step, until you finish your thesis. If a step seems difficult, break it down even more. That’s why I think breaking your goals down into smaller and easier steps might help when you feel lazy.
EA organizations frequently ask people to run criticism by them ahead of time. I’ve been wary of the push for this norm. My big concerns were that orgs wouldn’t comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like a good chance to gather data.
I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn’t quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.
Of those 14: 10 had replied by the start of the next day. More than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it’s still impressive.
It’s hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard at fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I’d realized ahead of time that I was going to cut an example.
Only 80,000 Hours, Anima International, and GiveDirectly failed to respond before publication (7 days after I emailed them). Of those, only 80k’s mention was negative.
I didn’t keep as close track of changes, but at a minimum replies led to 2 examples being removed entirely, 2 clarifications and some additional information that made the post better. So overall I’m very glad I solicited comments, and found the process easier than expected.
Nice, thanks for keeping track of this and reporting on the data!! <3
No pressure to respond, but I’m curious how long it took you to find the relevant email addresses, send the messages, then reply to all the people etc.? I imagine for me, the main costs would probably be in the added overhead (time + psychological) of having to keep track of so many conversations.
Off the top of my head: in maybe half the cases I already had the contact info. In one or two cases one of my beta readers passed on the info. For the remainder it was maybe <2m per org, and it turns out they all use info@domain.org so it would be faster next time.
There’s a thing in EA where encouraging someone to apply for a job or grant gets coded as “supportive”, maybe even a very tiny gift. But that’s only true when [chance of getting job/grant] x [value of job/grant over next best alternative] > [cost of applying].
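As a toy illustration with made-up numbers (nothing here is from a real application): suppose an applicant has a 3% chance of getting the grant, the grant is worth $20,000 more to them than their next best option, and the application takes 10 hours they value at $80/hour. Then

$$ 0.03 \times \$20{,}000 = \$600 \;<\; 10 \times \$80 = \$800, $$

so the encouragement reads as supportive but has negative expected value for the applicant.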
One really clear case was when I was encouraged to apply for a grant my project wasn’t a natural fit for, because “it’s quick and there are few applicants”. This seemed safe, since the deadline was in a few hours. But in those few hours the number of applications skyrocketed (I want to say 5x, but my memory is shaky), presumably because I wasn’t the only person the grantmaker encouraged. I ended up wasting several hours of my and my co-founders’ time before dropping out, because the project really was not a good fit for the grant.
[if the grantmaker is reading this and recognizes themselves: I’m not mad at you personally].
I’ve been guilty of this too, defaulting to encouraging people to try for something without considering the costs of making the attempt, or the chance of success. It feels so much nicer than telling someone “yeah you’re probably not good enough”.
A lot of EA job postings encourage people to apply even if they don’t think they’re a good fit. I expect this is done partially because orgs genuinely don’t want to lose great applicants who underestimate themselves, and partially because it’s an extremely cheap way to feel anti-elitist.
I don’t know what the solution is here. Many people are miscalibrated on their value or their competition, and all else being equal you do want to catch those people. But casting a wider net entails more bycatch.
It’s hard to accuse an org of being mean to someone they encouraged to apply for a job or grant. But I think that should be in the space of possibilities, and we should put more emphasis on invitations to apply for jobs/grants/etc being clear, and less on them being welcoming. This avoids wasting the time of people who were predictably never going to get the job.
I think this falls into a broader class of behaviors I’d call aspirational inclusiveness.
I do think shifting the relative weight from welcoming to clear is good. But I’d frame it as a “yes and” kind of shift. The encouragement message should be followed up with a dose of hard numbers.
Something I’ve appreciated from a few applications is the hiring manager’s initial guess for how the process will turn out. Something like “Stage 1 has X people and our very tentative guess is future stages will go like this”.
Scenarios can also substitute in areas where numbers may be misleading or hard to obtain. I’ve gotten this from mentors before, like here’s what could happen if your new job goes great. Here’s what could happen if your new job goes badly. Here’s the stuff you can control and here’s the stuff you can’t control.
Something I’ve tried to practice in my advice is giving some ballpark number and reference class. I tell someone they should consider skilling up in a hard area or pursuing a competitive field, then I tell them I expect success for fewer than 5% of the people I give that advice to, and then say they may still want to do it anyway for certain reasons.
Yes, it’s all very noisy. But numbers seem far, far better than expecting applicants to read between the lines of what a heartwarming message is supposed to mean, especially early-career folks who would understandably read it as implying a high probability of success.
One thing is just that discouragement is culturally quite hard and there are strong disincentives to do so; e.g. I think I definitely get more flak for telling people they shouldn’t do X than for telling them they should (including a recent incident which was rather personally costly). And I think I’m much more capable of diplomatic language than the median person in such situations; some of my critical or discouraging comments on this forum are popular.
I also know at least 2 different people who were told (probably wrongly) many years ago that they can’t be good researchers, and they still bring it up as recently as this year. Presumably people falsely told they can be good researchers (or correctly told that they cannot) are less likely to e.g. show up at EA Global. So it’s easier for people in positions of relative power or prestige to see the positive consequences of encouragement, and the negative consequences of discouragement, than the reverse.
Sometimes when people ask me about their chances, I try to give them off-the-cuff numerical probabilities. Usually the people I’m talking to appreciate it but sometimes people around them (or around me) get mad at me.
(Tbf, I have never tried scoring these fast guesses, so I have no idea how accurate they are).
One way my perspective has changed on this during the last few years is that I now advise others not to give much weight to a single point of feedback. Especially for those who’ve told me only one or two people discouraged them from be(com)ing a researcher, I tell them not to stop trying in spite of that, even when the person giving the discouraging feedback is in a position of relative power or prestige.
The last year seems to have proven that the power or prestige someone has gained in EA is a poor proxy for how much weight their judgment should be given on any single EA-related topic. If Will MacAskill and many of his closest peers are doubting how they’ve conceived of EA for years in the wake of the FTX collapse, I expect most individual effective altruists confident enough to judge another’s entire career trajectory are themselves likely overconfident.
Another example is AI safety. I’ve talked to dozens of aspiring AI safety researchers who’ve felt very discouraged by an illusory consensus thrust upon them that their work was essentially worthless because it didn’t superficially resemble the work being done by the Machine Intelligence Research Institute or whatever other approach was in vogue at the time. For years, I suspected that was bullshit.
Some of the brightest effective altruists I’ve met were being inundated with personal criticism harsher than any even Eliezer Yudkowsky would give. I told those depressed, novice AIS researchers to ignore those dozens of jerks who concluded that the way to give constructive criticism, like they presumed Eliezer would, was to emulate a sociopath. These people were just playing a game of ‘follow the leader’ that not even the “leaders” would condone. I distrusted their hot takes, based on clout and vibes, about who was competent and who wasn’t.
Meanwhile, over the last year or two, more and more of the AIS field, including some of its most reputed luminaries, have come out of the woodwork to say, essentially, “lol, turns out we didn’t know what we were doing with alignment the whole time, we’re definitely probably all gonna die soon, unless we can convince Sam Altman to hit the off switch at OpenAI.” I feel vindicated in my skepticism of the quality of the judgement of many of our peers.
Thanks for this post, as I’ve been trying to find a high-impact job that’s a good personal fit for 9 months now. I have noticed that EA organizations use what appears to be a cookie-cutter recruitment process with remarkable similarities across organizations and cause areas. This process is also radically different from what non-EA nonprofit organizations use for recruitment. Presumably EA organizations adopted this process because there’s evidence behind its effectiveness but I’d love to see what that evidence actually is. I suspect it privileges younger, (childless?) applicants with time to burn, but I don’t have data to back up this suspicion other than viewing the staff pages of EA orgs.
Can you say more about cookie-cutter recruitment? I don’t have a good sense of what you mean here.
I think solving this is tricky. I want hiring to be efficient, but most ways hiring orgs can get information take time, and that’s always going to be easier for people with more free time. I think EA has an admirable norm of paying for trials and deserves a lot of credit for that.
One possible solution is to have applicants create a prediction market on their chance of getting a job/grant, before applying—this helps grant applicants get a sense of how good their prospects are. (example 1, 2) Of course, there’s a cost to setting up a market and making the relevant info legible to traders, but it should be a lot less than the cost of writing the actual application.
Another solution I’ve been entertaining is to have grantmakers/companies screen applications in rounds, or collaboratively, such that the first phase of application is very very quick (eg “drop in your Linkedin profile and 2 sentences about why you’re a good fit”).
I’d be interested in seeing some organizations try out the very very quick method. Heck, I’d be willing to help set it up and trial run it. My rough/vague perception is that a lot of the information in a job application is superfluous.
I also remember Ben West posting some data about how a variety of “how EA is this person” metrics held very little predictive value in his own hiring rounds.
EA hiring gets a lot of criticism. But I think there are aspects at which it does unusually well.
One thing I like is that hiring and holding jobs feels way more collaborative between boss and employee. I’m much more likely to feel like a hiring manager wants to give me honest information and make the best decision, whether or not that’s with them. Relative to the rest of the world, they’re much less likely to take my investigating other options personally.
Work trials and even trial tasks have a high time cost, and are disruptive to people with normal amounts of free time and work constraints (e.g. not having a boss who wants you to trial with other orgs because they personally care about you doing the best thing, whether or not it’s with them). But trials are so much more informative than interviews, I can’t imagine hiring for or accepting a long-term job without one.
Trials are most useful when you have the least information about someone, so I expect removing them to lead to more inner-ring dynamics and less hiring of unconnected people.
EA also has an admirable norm of paying for trials, which no one does for interviews.
The impression I get from the interview paradigm vs work trial paradigm is: so much of today’s civilization is less than 100 years old, and really big transformations happen during each decade. The introduction of work trials is one of those things.
Two popular responses to FTX are “this is why we need to care more about honesty” and “this is why we need to not do weird/sketchy shit”. I pretty strongly believe the former. I can see why people would believe the latter, but I worry that the value lost is too high.
But I think both sides can agree that representing your weird/sketchy thing as mundane is highly risky. If you’re going to disregard a bunch of the normal safeguards of operating in the world, you need to replace them with something, and most of those somethings are facilitated by honesty.
None of my principled arguments against “only care about big projects” have convinced anyone, but in practice Google reorganized around that exact policy (“don’t start a project unless it could conceivably have 1b+ users, kill it if it’s ever not on track to reach that”) and they haven’t grown anything interesting since.
My guess is the benefits of immediately aiming high are overwhelmed by the costs of less contact with reality.
The policy was commonly announced when I worked at Google (2014); I’m sure anyone else who was there at the time would confirm its existence. In terms of “haven’t grown anything since”: I haven’t kept close track, but I can’t name one and frequently hear people say the same.
I like the Google Pixels. Well specifically I liked 2 and 3a but my current one (6a) is a bit of a disappointment. My house also uses Google Nest and Chromecast regularly. Tensorflow is okay. But yeah, overall certainly nothing as big as Gmail or Google Maps, never mind their core product.
Google was producing the Android OS and its own flagship phones well before the Pixel, so I consider it to predate my knowledge of the policy (although maybe the policy started before I got there, which I’ve now dated to 4/1/2013)
Please send me links to posts with those arguments you’ve made, as I’ve not read them, though my guess would be that you haven’t convinced anyone because some of the greatest successes in EA started out so small. I remember the same kind of skepticism being widely expressed about some projects like that.
Rethink Priorities comes to mind as one major example. The best example is Charity Entrepreneurship. Not only was it one of those projects whose potential scalability was doubted; it keeps incubating successful non-profit EA startups across almost every EA-affiliated cause. CE’s cumulative track record might be the best empirical argument against the broad applicability to the EA movement of your own position here.
Your comment makes the most sense to me if you misread my post and are responding to exactly the opposite of my position, but maybe I’m the one misreading you.
People talk about running critical posts by the criticized person or org ahead of time, and there are a lot of advantages to that. But the plans I’ve seen are all fairly one sided: all upside goes to the criticized, all the extra work goes to the critic.
What I’d like to see is some reciprocal obligation from recipients of criticism, especially formal organizations with many employees. Things like answering questions from potential critics very early in the process, with a certain level of speed and reliability. Right now orgs respond quickly to polished public criticism, and maybe even to polished posts sent to them before publication, but you can’t count on them to be fast or reliable at answering questions with implicit potential criticism behind them. Which is a pretty shitty deal for the critic, who I’m sure would love to find out their concern was unmerited before spending dozens of hours writing a polished post.
This might be unfair. I’m quite sure it used to be true, but a lot of the orgs have professionalized over the years. In which case I’d like to ask they make their commitments around this public and explicit, and share them in the same breath that they ask for heads up on criticism.
But the plans I’ve seen are all fairly one sided: all upside goes to the criticized, all the extra work goes to the critic.
I see a pretty important benefit to the critic, because you’re ensuring that there isn’t some obvious response to your criticisms that you are missing.
I once posted something that revised/criticized an Open Philanthropy model, without running it by anyone there, and it turned out that my conclusions were shifted dramatically by a coding error that was detected immediately in the comments.
That’s a particularly dramatic example that I don’t expect to generalize, but often if a criticism goes “X organization does something bad” the natural question is, why do they do that? Is there a reason that’s obvious in hindsight that they’ve thought about a lot, but I haven’t? Maybe there isn’t, but I would want to run a criticism by them just to see if that’s the case.
I don’t think people are obligated to build in the feedback they get extensively if they don’t think it’s valid/their point still stands.
I don’t have any disagreement with getting people information early, I just think characterizing the current system as one where only the criticizee benefits is wrong.
A few benefits I see to the critic even in the status quo:
The post generally ends up stronger, because it’s more accurate. Even if you only got something minor wrong, readers will (reasonably!) assume that if you’re not getting your details right then they should pay less attention to your post.
To the extent that the critic wants the public view to end up balanced and isn’t just trying to damage the criticizee, having the org’s response go live at the same time as the criticism helps.
If the critic does get some things wrong despite giving the criticizee the opportunity to review and bring up additional information, either because the criticizee didn’t mention these issues or refused to engage, the community would generally see it as unacceptable for the criticizee to sue the critic for defamation. Whereas if a critic posts damaging false claims without that (and without a good reason for skipping review, like “they abused me and I can’t sanely interact with them”), then I think legal action is still on the table.
A norm where orgs need to answer critical questions promptly seems good on its face, but I’m less sure in practice. Many questions take far more effort to answer well than they do to pose, especially if they can’t be answered from memory. Writing a ready-to-go criticism post is a way of demonstrating that you really do care a lot about the answer to this question, which might be needed to keep down the work of answering not-actually-that-important questions? But there could be other ways?
You’re not wrong, but I feel like your response doesn’t make sense in context.
The post generally ends up stronger, because it’s more accurate
Handled vastly better by being able to reliably get answers about concerns earlier.
To the extent that the critic wants the public view to end up balanced and isn’t just trying to damage the criticizee
Assumes things are on a roughly balanced footing and unanswered criticism pushes it out of balance. If criticism is undersupplied for large orgs, making it harder makes things less balanced (but rushed or bad criticism doesn’t actually fix this, now you just have two bad things happening)
If the critic does get some things wrong despite giving the criticizee the opportunity to review and bring up additional information, either because the criticizee didn’t mention these issues or refused to engage, the community would generally see it as unacceptable for the criticizee to sue the critic for defamation
I’m asking the potential criticizee to provide that information earlier in the process.
A friend asked me which projects in EA I thought deserved more money, especially ones that seemed to be held back by insufficient charisma of the founders. After a few names he encouraged me to write it up. This list is very off the cuff and tentative: in most cases I have pretty minimal information on the project, and they’re projects I incidentally encountered on EAF. If you have additions I encourage you to comment with them.
The main list
The bar here is “the theory of change seems valuable, and worse projects are regularly funded”.
Faunalytics is a data analysis firm focused on metrics related to animal suffering. I searched high and low for health data on vegans that included ex-vegans, and they were the only place I found anything that had any information from ex-vegans. They shared their data freely and offered some help with formatting, although in the end it was too much work to do my own analysis.
I do think their description minimized the problems they found. But they shared enough information that I could figure that out rather than relying on their interpretation, and that’s good enough.
EA is trend-following to unfortunate degrees. ALLFED picked the important but unsexy target of food security during catastrophes, and has been steadfastly pursuing it for 7 years.
CE runs a bootcamp/incubator that produces several new charities each run. I don’t think every project that comes out of this program is gold. I don’t even know of any projects that make me go “yes, definitely amazing”. But they are creating founder-talent where none existed before, and getting unsexy projects implemented, and building up their own skill in developing talent.
Exotic Tofu Project, perhaps dependent on getting a more charismatic co-founder
I was excited by George Stiffman’s original announcement of a plan to bring unknown Chinese tofus into the West and market them to foodie omnivores as desirable and fun. This was a theory of change that could work, and that avoided the ideological sirens that crash so many animal suffering projects on the rocks.
Then he released his book. It was titled Broken Cuisine: Chinese tofu, Western cooking, and a hope to save our planet. The blurb opens with “Our meat-based diets are leading to antibiotic-resistant superbugs, runaway climate change, and widespread animal cruelty. Yet, our plant-based alternatives aren’t appealing enough. This is our Broken Cuisine. I believe we must fix it.” This is not a fun and high-status leisure activity: this is doom and drudgery, aimed at the already-converted. I think doom and drudgery strategies are oversupplied right now, but I was especially sad to see this from someone I thought understood the power of offering options that were attractive on people’s own terms.
This was supposed to be a list of projects that are underfunded due to lack of charisma; unfortunately this project requires charisma. I would still love to see it succeed, but I think that will require a partner who is good at convincing the general public that something is high-status. My dream is a charming, high-status reducetarian or ameliatarian foodie influencer, but just someone who understood omnivores on their own terms would be an improvement.
I love impact certificates as a concept, but they’ve yet to take off. They suffer from both lemon and coordination problems. They’re less convenient than normal grants, so impact certificate markets only get projects that couldn’t get funding elsewhere. And it’s a 2- or 3-sided market (producers, initial purchasers, later purchasers).
There are a few people valiantly struggling to make impact certificates a thing. I think these are worth funding directly, but it’s also valuable to buy impact certificates. If you don’t like the uncertainty of the project applications, you can always be a secondary buyer; those are perhaps even rarer.
Projects I know of doing impact certificates
Manifund. Manifund is the IC subproject of Manifold Markets, which manifestly does not suffer from lack of extroversion in its founder. But impact certs are just an uphill battle, and private conversations with founder Austin Chen indicated they had a lot of room for more funding.
ACX Grants runs an impact certificate program, managed by Manifund.
Oops, the primary project I was thinking of has gone offline.
Full disclosure: I know Ozzie socially, although not so well as to put him in the Conflicts of Interest section.
Similar to ALLFED: I don’t know that QURI’s estimation tools are the most important project, but I do know Ozzie has been banging the drums on forecasting for years, way before Austin Chen made it cool, and it’s good for the EA ecosystem to have that kind of persistent pursuit in the mix.
Community Building
Most work done under the name “community building” is recruiting. Recruiting can be a fine thing to do, but it makes me angry to see it mislabeled like this while actual community building starves. Community recruiting is extremely well funded, at least for people willing to frame their project in terms of accepted impact metrics. However, if you do the harder part of actually building and maintaining a community that nourishes members, and are uncomfortable pitching impact when that’s not your focus, money is very scarce. This is a problem because:
EA has an extractive streak that can burn people out. Having social support that isn’t dependent on validation from EA authorities is an important counterbalance.
The people who are best at this are the ones doing it for its own sake rather than optimizing for short-term proxies on long-term impact. Requiring fundees to aim at legible impact selects for liars and people worse at the job.
People driven to community build for its own sake are less likely to pursue impact in other ways. Even if you think impact-focus is good and social builders are not the best at impact, giving them building work frees up someone impact-focused to work on something else.
Unfortunately I don’t have anyone specific to donate to, because the best target I know already burnt out and quit. But I encourage you to be on the lookout in your local community. Or be the change I want to see in the world: being An Organizer is hard, but hosting occasional movie nights or proactively cleaning at someone else’s party is pretty easy, and can go a long way towards creating a healthy connected community.
Projects with Conflicts of Interest
The first section featured projects I know only a little about. This section includes projects I know way too much about, to the point I’m at risk of bias.
Independent grant-funded researchers
Full disclosure: I am an independent, sometimes grant-funded, researcher.
This really needs to be its own post, but in a nutshell: relying on grants for your whole income sucks, and often leaves you with gaps or at least a lot of uncertainty. I’m going to use myself as an example because I haven’t run surveys or anything, but I expect I’m on the easier end of things.
The core difficulty with grant-financing: grantmakers don’t want to pay too far in advance. Grantmakers don’t want to approve new grants until you’ve shown results from the last grant. Results take time. Grant submissions, approval, and payout also take time. This means that, at best, you spend many months not knowing if you’ll have a funding gap, and many times the answer will be yes. I don’t know if this is the grantmakers’ fault, but many people feel pressure to ask for as little money as possible, which makes the gaps a bigger hardship.
I get around this by contracting and treating grants as one client out of several, but I’m lucky that’s an option. It also means I spend time on projects that EA would consider suboptimal. Other problems: I have to self-fund most of my early work, because I don’t want to apply for a grant until I have a reasonable idea of what I could hope to accomplish. There are projects I’ve been meaning to do for years that are too big to self-fund, and too illegible and inspiration-dependent to survive a grant process. I have to commit to a project at application time but then not start until the application is approved, which could be months later.
All-purpose funding with a gentle reapplication cycle would let independents take more risks at a lower psychological toll. Or test out Austin Chen’s idea of ~employment-as-a-service. Alas, neither would help me right this second: illness has put me behind on some existing grant-funded work, so I shouldn’t accept more money right now. But other independents could; if you know of any, please leave a pitch in the comments.
Full disclosure: I volunteer and am very occasionally paid for work at Lightcone, and have deep social ties with the team.
Lightcone’s issue isn’t so much charisma as that the CEO is allergic to accepting money with strings, and the EA-offered money comes with strings. I like Lightcone’s work, and some of my favorite parts of their work would have been much more difficult without that independence.
Very grateful for the kind words, Elizabeth! Manifund is facing a funding shortfall at the moment, and will be looking for donors soon (once we get the ACX Grants Impact Market out the door), so I really appreciate the endorsement here.
(Fun fact: Manifund has never actually raised donations for our own core operations/salary; we’ve been paid ~$75k in commission to run the regrantor program, and otherwise have been just moving money on behalf of others.)
i’ve been working at manifund for the last couple months, figured i’d respond where austin hasn’t (yet)
here’s a grant application for the meta charity funders circle that we submitted a few weeks ago, which i think is broadly representative of who we are & what we’re raising for.
tldr of that application:
core ops
staff salaries
misc things (software, etc)
programs like regranting, impact certificates, etc, for us to run how we think is best[1]
additionally, if a funder was particularly interested in a specific funding program, we’re also happy to provide them with infrastructure. e.g. we’re currently facilitating the ACX grants, we’re probably (70%) going to run a prize round for dwarkesh patel, and we’d be excited about building/hosting the infrastructure for similar funding/prize/impact cert/etc programs. this wouldn’t really look like [funding manifund core ops, where the money goes to manifund], but rather [running a funding round on manifund, where the funding mostly[2] goes to object-level projects that aren’t manifund].
i’ll also add that we’re less funding-crunched than when austin first commented; we’ll be running another regranting round, for which we’ll be paid another $75k in commission. this was new info between his comment and this comment. (details of this are very rough/subject to change/not firm.)
i’m keeping this section intentionally vague. what we want is [sufficient funding to be able to run the programs we think are best, iterate & adjust quickly, etc] not [this specific particular program in this specific particular way that we’re tying ourselves down to]. we have experimentation built into our bones, and having strings attached breaks our ability to experiment fast.
I probably would have had ALLFED and CE on a list like this had I written it (don’t know as much about most of the other selections). It seems to me that both organizations get, on a relative basis, a whole lot more public praise than they get funding. Does anyone have a good explanation for the praise-funding mismatch?
TL;DR: I think the main reason is the same reason we aren’t donating to them: we think there are even more promising projects in terms of the effectiveness of a marginal $, and we are extremely funding constrained. I strongly agree with Elizabeth that all these projects (and many others) deserve more money.
Keeping in mind that I haven’t researched any of the projects, and I’m definitely not an expert in grantmaking; I personally think that “the theory of change seems valuable, and worse projects are regularly funded” is not the right bar to estimate the relative value of a marginal dollar, as it doesn’t take into account funding-gaps, costs, and actual results achieved.
As a data point on the perspective of a mostly uninformed effectiveness-oriented small donor, here’s why I personally haven’t donated to these projects in 2023, starting from the 2 you mention.
I’m not writing this because I think they are good reasons to fund other projects, but as a potentially interesting data-point in the psychology of an uninformed giver.
ALLFED:
Their theory of change seems really cool, but research organizations seem very hard to evaluate as a non-expert. I think 3 things all need to go right for research to be impactful:
The research needs to find “surprising”/”new” impactful interventions (or show that existing top interventions are surprisingly less cost-effective)
The research needs to be reliable and generally high quality
The research needs to be influential and decision-relevant for the right actors.
It’s really hard to evaluate each of the three as a non-expert. I would also be surprised if this was particularly neglected, as ALLFED is very famous in EA, and Denkenberger seems to have a good network. I also don’t know what more funding would lead to, and their track record is not clear to me after >6 years (but that is very much my ignorance; and because evaluating research is hard)
Charity Entrepreneurship (CE):
They’re possibly my favourite EA org (which is saying a lot; the bar is very high). I recommended allocating $50k to CE when I won a donor lottery. But because they’re so obviously cost-effective, if they ever have a funding need, I imagine tons of us would be really eager to jump in and help fill it, including e.g. the EAIF. So I personally would consider a donation to CE as counterfactually ~similar to a donation to the EAIF.
Regarding CE-incubated projects, I do donate a bit to them, but I personally believe that some of the medium-large donors in the CE seed network are very thoughtful and experienced grantmakers. So, I don’t expect the unfunded projects to be the most promising CE projects. Some projects like Healthier Hens do scale down due to lack of funding after some time, but I think a main reason in that case was that some proposed interventions turned out to not work or cost more than they expected. See their impact estimates.
Faunalytics:
They are super well known and have been funded by OpenPhil and the EA Animal Welfare Fund for specific projects; I defer to them. While they have been an ACE-recommended charity for 8 years, I don’t know if the marginal dollar has more impact there compared to the other extremely impressive animal orgs.
Exotic Tofu:
It seems really hard to evaluate, Elizabeth mentions some issues, but in general my very uninformed opinion is that if it wouldn’t work as a for-profit it might be less promising as a non-profit compared to other (exceptional) animal welfare orgs.
Impact Certificates:
I think the first results weren’t promising, and I fear it’s mostly about predicting the judges’ scores, since it’s rare to have good metrics and evaluations. That said, Manifund seems cool, and I made a $12 offer for Legal Impact for Chickens to try it out.[1] Since you donate to them and have relevant specific expertise, you might have alpha here and it might be worth checking out.
Edit: see the object-level response from Ozzie; the above is somewhat wrong and I expect other points about other orgs to be wrong in similar ways
Community Building:
I’m personally unsure about the value of non-impact-oriented community building. I see a lot of events like “EA Karaoke Night”, which I think are great but:
I’m not sure they’re the most cost-effective way to mitigate burnout
I think there are very big downsides in encouraging people to rely on “EA” for both social and economic support
I worry that “EA” is getting increasingly defined in terms of social ties instead of impact-focus, and that makes us less impactful and optimize for the wrong things (hopefully, I’ll write a post soon about this. Basically, I find it suboptimal that someone who doesn’t change their career, donate, or volunteer, but goes to EA social events, is sometimes considered closer to the quintessential “EA” compared to e.g. Bill Gates)
Independent grant-funded researchers:
See ALLFED above for why it’s hard for me to evaluate research projects, but mostly I think this obviously depends a lot on the researcher. But I think the point is about better funding methodology/infrastructure and not just more funding.
Lightcone:
I hear conflicting things about the dynamics there (the point about “the bay area community”). I’m very far from the Bay Area, and I think projects there are really expensive compared to other great projects. I also thought they had less of a funding need nowadays, but again I know very little.
Please don’t update much on the above in your decisions on which projects to fund. I know almost nothing about most of the projects above and I’m probably wrong. I also trust grantmakers and other donors have much more information, experience, and grantmaking skills; and that they have thought much more about each of the orgs mentioned. This is just meant to be an answer to “Does anyone have a good explanation for the praise-funding mismatch?” that basically is a bunch of guessed examples for: “many things can be very praise-worthy without being a great funding opportunity for many donors”
But I really don’t expect to have more information than the AWF on this, and I think they’ll be the judge, so rationally, I should probably just have donated the money to the AWF. I think I’m just not the target audience for this.
We’ve spent a lot of time on blog posts / research and other projects, as well as Squiggle Hub. (Though in the last year especially, we’ve focused on Squiggle.)
Regarding users, I’d agree it’s not as many as I would have liked, but I think we are getting some. If you look through the Squiggle Tag, you’ll see several EA groups who have used Squiggle.
We’ve been working with a few EA organizations on Squiggle setups that are mostly private.
Of course! In general I’m happy for people to make quick best-guess evaluations openly—in part, that helps others here correct things when there might be some obvious mistakes. :)
For many CE-incubated charities, the obvious counterfactual donation would be to GiveWell top charities, and that’s a really high bar.
I consider the possibility that a lot of ALLFED’s potential value proposition comes from a low probability of saving hundreds of millions to billions of lives in scenarios that would counterfactually neither lead to extinction nor produce major continuing effects thousands of years down the road.
If that is so, it is plausible that this kind of value proposition may not be particularly well suited to many neartermist donors (for whom the chain of contingencies leading to impact may be too speculative for their comfort level) or to many strong longtermist donors (for whom the effects thousands to millions of years down the road may be weaker than for other options seen as mitigating extinction risk more).
If you had a moral parliament of 50 neartermists & 50 longtermists that could fund only one organization (and by a 2⁄3 majority vote), one with this kind of potential impact model might do very well!
For many CE-incubated charities, the obvious counterfactual donation would be to GiveWell top charities, and that’s a really high bar.
I think this is right and important. A possible additional layer: some donors are more comfortable with experimental or hits-based giving than others. Those people disproportionately go into x-risk. The donors remaining in global poverty/health are both more averse to uncertainty and have options for avoiding it (both objectively and vibe-wise).
I really agree with the first point, and the really high bar is the main reason all of these projects have room for more funding.
I somewhat disagree with the second point: my impression is that many donors are interested in mitigating non-existential global catastrophic risks (e.g. natural pandemics, climate change), but I don’t have much data to support this.
I don’t think “many donors are interested in mitigating non-existential global catastrophic risks” is necessarily inconsistent with the potential explanation for why organizations like ALLFED may get substantially more public praise than funding. It’s plausible to me that an org in that position might be unusually good at rating highly on many donors’ charts, without being unusually good at rating at the very top of donors’ lists:
There’s no real limit on how many orgs one can praise, and preventing non-existential GCRs may win enough points on donors’ scoresheets to receive praise from the two groups I described above (focused neartermists and focused longtermists) in addition to its actual donors.
However, many small/mid-size donors may fund only their very top donation opportunities (e.g., top two, top five, etc.)
To maximise nearterm welfare, one had better donate to the best animal welfare interventions.
I estimate corporate campaigns for chicken welfare, like the ones promoted by The Humane League, are 1.37 k times as cost-effective as GiveWell’s top charities.
To maximise nearterm human welfare in a robust way, one had better donate to GiveWell’s funds.
I guess the cost-effectiveness of ALLFED is of the same order of magnitude as that of GiveWell’s funds (relatedly), but it is way less robust (in the sense that my best guess will change more upon further investigation).
CEARCH estimated “the cost-effectiveness of conducting a pilot study of a resilient food source to be 10,000 DALYs per USD 100,000, which is around 14× as cost-effective as giving to a GiveWell top charity”. “The result is highly uncertain. Our probabilistic model suggests a 53% chance that the intervention is less cost-effective than giving to a GiveWell top charity, and an 18% chance that it is at least 10× more cost-effective. The estimated cost-effectiveness is likely to fall if the intervention is subjected to further research, due to optimizer’s curse”. I guess CEARCH is overestimating cost-effectiveness (see my comments).
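As a rough back-of-the-envelope reading of those figures (my own arithmetic, not CEARCH’s):

$$ \frac{10{,}000 \text{ DALYs}}{14} \approx 714 \text{ DALYs per } \$100\text{k} \;\Rightarrow\; \text{roughly } \$140 \text{ per DALY as the implied GiveWell benchmark,} $$

which gives a sense of how much is riding on that 14× point estimate given the 53%/18% probability spread they report.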
My impression is that efforts to decrease the number of nuclear detonations are more cost-effective than ones to decrease famine deaths caused by nuclear winter. This is partly informed by CEARCH estimating that lobbying for arsenal limitation is 5 k times as cost-effective as GiveWell’s top charities, although I guess the actual cost-effectiveness is more like 0.5 to 50 times that of GiveWell’s top charities.
As always (unless otherwise stated), the views expressed here are my own, not those of ALLFED.
I’m wrong and they’re not outstanding orgs, but discovering that takes work the praisers haven’t done.
The praise is a way to virtue signal, but people don’t actually put their money behind it.
The praise is truly meant and people put their money behind it, but none of the praise is from the people with real money.
I believe CE has received OpenPhil money, and ALLFED has received CEA and SFF money, just not as much as they wanted. Maybe the difference is not in # of grants approved, but in how much room for funding big funders believe they have or want to fill.
I’m not sure of CE’s funding situation, it was the incubated orgs that they pitched as high-need.
Maybe the OpenPhil AI and meta teams are more comfortable fully funding something than other teams.
ALLFED also gets academic grants; maybe funders fear their money will replace those rather than stack on top of them.
OpenPhil has a particular grant cycle, maybe it doesn’t work for some orgs (at least not as their sole support).
On exotic tofu: I am not yet convinced that Stiffman doesn’t have the requisite charisma. Is your concern that he’s vegan (hence less relatable to non-vegans), his messaging in Broken Cuisine specifically, or something else? I am sympathetic to the first concern, but not as convinced by the second. In particular, from what little else I’ve read from Stiffman, his messaging is more like his original post on this Forum: positive and minimally doom-y. See, for example, his article in Asterisk, this podcast episode (on what appears to be a decently popular podcast?), and his newsletter.
Have you reached out to him directly about your concerns about his messaging? Your comments seem very plausible to me and reaching out seems to have a high upside.
I sent a message to George Stiffman through a mutual friend and never heard back, so I gave up after 2 pings (to the friend).
Thanks for mentioning places Stiffman comes across better. I’ve read the Asterisk article and found it irrelevant to his consumer-aimed work. Maybe the Bittman podcast is consumer-targeted and an improvement, I dunno. For now I can’t get over that book title and blurb.
Not well. I only have snippets of information, and it’s private (Habryka did sign off on that description).
I don’t know if this specifically has come up in regards to Lightcone or Lighthaven, but I know Habryka has been steadfastly opposed to the kind of slow, cautious, legally-defensive actions coming out of EVF. I expect he would reject funding that demanded that approach (and if he accepted it, I’d be disappointed in him, given his public statements).
Thanks for putting the Exotic Tofu Project on my screen! I also like all the others.
We (me and my cofounder) run yet another “impact certificates” project. We started out with straightforward impact certificates, but the legal hurdles for us and for the certificate issuers turned out too high and possibly (for us) insurmountable, at least in the US.
We instead turned to the system that works for carbon credits. Credits are not so much traded at the level of the individual certificate or impact claim; instead, validators confirm that the impact has happened according to certain standards and then pay out the impact credits (or carbon credits) associated with that standard.
That system seems more promising to us because it has all the advantages of impact certificate markets, plus the advantage that one party (e.g., us) can fight the legal battle in the US once for this impact credit (and can even rely on the precedent of carbon credits), thereby paving the ground for all the other market participants who come after and don’t have to worry about the legalities anymore. There are already a number of non-EA organizations working toward a similar vision.
Even outside such restrictive jurisdictions as the US, this system has the advantage that it allows for deeper liquidity on the impact credit markets (compared to auctions for individual impact certificates). But the US is an important market for EA and AI safety, so we couldn’t just ignore it even if it weren’t for this added benefit.
We started bootstrapping this system with GiveWiki in January of last year, but over the course of the year we found it very hard to find anyone who wanted to use the system as a donor/grantmaker. Most of the grantmakers we were in touch with had lost their funding in Nov. 2022; others wanted to wait until the system was mature; and many smaller donors had no trouble finding great funding gaps without our help.
We will keep the platform running, but we’ll probably have to wait for the next phase of funding overhang, when there are more grantmakers and they actually have trouble finding their funding gaps.
What a market does, idealizing egregiously, is let people with special knowledge or insight invest in things early. Less informed people (some of whom have more capital) can then watch the valuations and invest in projects with high and rising valuations, or some other valuation-based marker of quality: a process of price discovery.
AngelList, for example, facilitates that. They have a no-action letter from the SEC (and the startups on AngelList have at least a Regulation D filing, I imagine), so they didn’t have to register as a broker-dealer to be allowed to match startups to investors. I think they have some funds that are led by seasoned investors, and then the newbie investors can follow the seasoned ones by investing into their funds. Or some mechanism of that sort.
We’re probably not getting a no-action letter, and we don’t have the money yet to start the legal process to get our impact credits registered with the CFTC. So instead we recognized that in the above example investors are treating valuations basically like scores. So we’re just using scores for now. (Some rich people say money is just for keeping score. We’re not rich, so we use scores directly.)
The big advantage of actual scores (rather than using monetary valuations like scores) is that it’s legally easy. The disadvantage is that we can’t pitch GiveWiki to profit-oriented investors.
So unlike AngelList, we’re not giving profit-oriented investors the ability to follow more knowledgeable profit-oriented investors, but we’re allowing donors/grantmakers to follow more knowledgeable donors/grantmakers. (One day, with the blessing of the CFTC, we can hopefully lift that limitation.)
We usually frame this as a process of three phases:
Implement the equivalent of price discovery with a score. (The current state of GiveWiki.)
Pay out a play money currency according to the score.
Turn the play money currency into a real impact credit that can be sold for dollars (with the blessing of the CFTC).
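To illustrate the first phase, here is a minimal sketch of how score-based “following” could work: donors endorse projects, endorsements are weighted by each donor’s track record, and the weighted sum plays the role a valuation plays on AngelList. All names, fields, and the weighting scheme are hypothetical illustrations, not GiveWiki’s actual data model or algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Donor:
    name: str
    track_record: float  # hypothetical weight in [0, 1], e.g. past grantmaking accuracy

@dataclass
class Project:
    name: str
    endorsements: dict = field(default_factory=dict)  # donor name -> amount pledged

def project_score(project: Project, donors: dict) -> float:
    """Weighted sum of endorsements: more knowledgeable donors move the score more,
    so less-informed donors can 'follow' high-scoring projects, mimicking price discovery."""
    return sum(
        amount * donors[name].track_record
        for name, amount in project.endorsements.items()
        if name in donors
    )

# Usage sketch (hypothetical numbers)
donors = {
    "alice": Donor("alice", track_record=0.9),
    "bob": Donor("bob", track_record=0.4),
}
project = Project("example-project", endorsements={"alice": 1_000, "bob": 5_000})
print(project_score(project, donors))  # 0.9 * 1000 + 0.4 * 5000 = 2900.0
```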
Complaints about lack of feedback for rejected grants are fairly frequent, but it seems relevant that I can’t get feedback for my accepted grants or in-progress work. The most I have ever gotten was a 👍 react when I texted them “In response to my results I will be doing X instead of the original plan on the application”. In fact I think I’ve gotten more feedback on rejections than acceptances (or in one case, I received feedback on an accepted grant, from a committee member who’d voted to reject). Sometimes they give me more money, so it’s not that the work is so bad it’s not worth commenting on. Admittedly my grants are quite small, but I’m not sure how much feedback medium or even large projects get.
Acceptance feedback should be almost strictly easier to give, and higher impact. You presumably already know positives about the grant, the impact of marginal improvements is higher in most cases, people rarely get mad about positive feedback, and even if you share negatives the impact is cushioned by the fact that you’re still approving their application. So without saying where I think the line should be, I do think feedback for acceptances is higher priority than for rejections.
A relevant question here is “what would I give up to get that feedback?”. This is very sensitive to the quality of feedback and I don’t know exactly what’s on offer, but… I think I’d give up at least 5% of my grants in exchange for a Triplebyte-style short email outlining why the grant was accepted, what their hopes are, and potential concerns.
I have had that experience too. It seems grant work is pretty independent. I think it is worth emphasizing that even though you might not get much beyond a thumbs up, it is important to inform the grantmakers about changes in plans. Moreover, I think your way of doing it as a statement instead of as a question is a good strategy. I have also included something along the lines of “if you have concerns, questions or objections about my proposed change of plan, please contact me asap”, so that the ball is firmly in the grantmakers’ court and it seems fair to interpret a lack of response as an endorsement of your proposed changes.
As of October 2022, I don’t think I could have known FTX was defrauding customers.
If I’d thought about it I could probably have figured out that FTX was at best a casino, and I should probably think seriously before taking their money or encouraging other people to do so. I think I failed in an important way here, but I also don’t think my failure really hurt anyone, because I am such a small fish.
But I think in a better world I should have had the information that would lead me to conclude that Sam Bankman-Fried was an asshole who didn’t keep his promises, and that this made it risky to make plans that depended on him keeping even explicit promises, much less vague implicit commitments. I have enough friends of friends that have spoken out since the implosion that I’m quite sure that in a more open, information-sharing environment I would have gotten that information. And if I’d gotten that information, I could have shared it with other small fish who were considering uprooting their lives based on implicit commitments from SBF. Instead, I participated in the irrational exuberance that probably made people take more risks on the margin, and left them more vulnerable to the collapse of FTX. Assigning culpability is hard here, but this isn’t just an abstract worry: I can think of one person I might bear some responsibility for, and another who I would be almost 100% responsible for, except they didn’t get the grant.
I think the encouragement I gave people represents a moral failure on my part. I should have realized I didn’t have enough information to justify it, even if I never heard about specific bad behavior. Hell even if SBF wasn’t an unreliable asshole, Future Fund could have turned off the fire hose for lots of reasons. IIRC they weren’t even planning on continuing the regrantor project.
But it would also have been cool if that low key, “don’t rely on Sam- I’m not accusing him of anything malicious, he’s just not reliable” type of information had circulated widely enough that it reached me and the other very small fish, especially the ones taking major risks that only made sense in an environment where FTX money flowed freely.
I don’t know what the right way to do that would have been. But it seems important to figure out.
I also suspect that in an environment where it was easy to find out that SBF was an unreliable asshole, it would have been easier to discover or maybe even prevent the devastating fraud, because people would have felt more empowered to say no to him. But that might be wishful thinking.
I think the encouragement I gave people represents a moral failure on my part. I should have realized I didn’t have enough information to justify it, even if I never heard about specific bad behavior.
I don’t know the specific circumstances of your or anyone else’s encouragement, so I want to be careful not to opine on any specific circumstances. But as a general matter, I’d encourage self-compassion for “small fish” [1] about getting caught up in “irrational exuberance.” Acting in the presence of suboptimal levels of information is unavoidable, and declining to act until things are clearer carries moral weight as well.
In retrospect, we know that the EA whispernet isn’t that reliable, that prominence in EA shouldn’t be seen as a strong indicator of reliability, that the media was asleep at the wheel, and that crypto investors exercise very minimal due diligence. But I don’t think we should expect “small fish” to have known those things in 2021 and 2022.
Hell even if SBF wasn’t an unreliable asshole, Future Fund could have turned off the fire hose for lots of reasons. IIRC they weren’t even planning on continuing the regrantor project.
As far as other potential failure modes, I think an intelligent individual doing their due diligence before making a major life decision would have spotted those risks. It would have been easy to find out (without relying on inside EA knowledge) that anything crypto is risky as hell, that anything involving a company that has only been in operation a few years is pretty risky on top of that, that SBF didn’t have a long track record of consistent philanthropy at this level, and (in most cases) that the grants were fairly short-term with no guarantee of renewal.
Given that we should be gentle and understanding toward small fish who relied on FTX funding to their detriment, I would extend similar gentleness and understanding toward small fish who encouraged others (at least without actively downplaying risks). So I think there’s a difference between encouragement that actively downplayed those risks, and encouragement that did not affirmatively include a recognition of the relatively high risk.
But it would also have been cool if that low key, “don’t rely on Sam- I’m not accusing him of anything malicious, he’s just not reliable” type of information had circulated widely enough that it reached me and the other very small fish, especially the ones taking major risks that only made sense in an environment where FTX money flowed freely.
I directionally agree, but think it is important to recognize that the signal of SBF’s unreliability would likely be contained in a sea of noise of inaccurate information, accurate information that didn’t predict future bad outcomes, and outright malicious falsehoods. A rational individual would discount the reports, with the degree of discount based on the signal:noise ratio of the whispernet and other factors.
Given the reasons FTX funding could fall through for reasons unrelated to SBF’s character or reliability, I predict that such a signal getting out would have had a meaningful—yet fairly modest—effect on proper risk evaluation by a small fish. For sure, an increase from 30% to 40% risk [making numbers up, but seems roughly plausible?] would have changed some decisions at the margin (whether to accept grants at all, or to take more precautions).
But we would also need to weigh that other decisions would have changed for the worse because of noise in the whispernet about other people. While the tradeoff can be mitigated to some extent, I think it is largely inherent to whispernets and most other reputation-based systems. I generally think that assessing and communicating this sort of risk is very difficult, and that some sort of system for ameliorating the situation of people who get screwed is therefore a necessary piece of the solution. To me, this is similar to how a rational response to the risk of fire includes both fire prevention (being aware of and mitigating risks) and fire insurance (because prevention is not a foolproof process).
I think expecting myself to figure out the fraud would be unreasonable. As you say, investors giving him billions of dollars didn’t notice, why should I, who received a few tens of thousands, be expected to do better due diligence? But I think a culture where this kind of information could have bubbled up gradually is an attainable and worthwhile goal.
E.g. I think my local community handled covid really well. That didn’t happen because someone wrote a big scary announcement. It was an accumulation of little things, like “this is probably nothing but always good to keep a stock of toilet paper” and “if this is airborne masks are probably useful”. And that could happen because those small statements were allowed. And I think it would have been good if people could similarly share small warnings about SBF as casually as they shared good things, and an increasingly accurate picture would emerge over time.
Am I understanding right that the main win you see here would have been protecting people from risks they took on the basis that Sam was reasonably trustworthy?
I also feel pretty unsure but curious about whether a vibe of “don’t trust Sam / don’t trust the money coming through him” would have helped discover or prevent the fraud—if you have a story for how it could have happened (e.g. via, as you say, people feeling more empowered to say no to him—maybe via his staff making fewer crazy moves on his behalf / standing up to him more?), I’d be interested.
“protect people from dependencies on SBF” is the thing for which I see a clear causal chain and am confident in what could have fixed it.
I do have a more speculative hope that an environment where things like “this billionaire firehosing money is an unreliable asshole” are easy to say would have gotten better outcomes for the more serious issues, on the margin. Maybe the FTX fraud was overdetermined; even if it wasn’t, I definitely don’t have enough insight to be confident in picking a correction. But using an abstract version of this case as an example of how I think a more open environment could have led to better outcomes:
My sense is SBF just kept taking stupid unethical bets and having them work out for him financially and socially. Maybe small consequences early on would have reduced the reward to stupid unethical bets.
Before the implosion, SBF(’s public persona) was an EA success story that young EAs aspired to copy. Less of that on the margin would probably lead to less fraud 5 years from now, especially in the world where the FTX fraud took longer to discover.
I think aping SBF’s persona was bad for other reasons, but they’re harder to justify.
SBF would have gotten more push back from staff (unless the fact that he was a known asshole made people more likely to leave, which seems good for them but not an improvement vis a vis fraud).
FTX would have had a harder time recruiting, which would have slowed them down.
Some EAs chose to trade on FTX out of ingroup loyalty, and maybe that would have happened less.
An environment where you’re free to share information about SBF being an unreliable asshole is more hospitable to sharing and hearing other negative information, and this has a snowball effect. Who knows what else would have been shared if the door had been open a crack.
Maybe Will MacAskill would have spent less time telling the press that SBF was a frugal virtue wunderkind.
Maybe other people would have told Will MacAskill to stop telling the press that SBF was a frugal virtue wunderkind.
Maybe Will MacAskill would have pushed that line to the press, but other people would have told the press “no he isn’t”, and that could have been a relatively gentle lesson for SBF and Will.
My sense is Will isn’t the only prominent EA who gave SBF a lot of press, just the most prominent and the one I heard the most about. Hopefully all of that would be reduced.
Maybe people would have been more open when considering whether FTX money was essentially casino money, and what the ethical implications of that were.
Good posts generate a lot of positive externalities, which means they’re undersupplied, especially by people who are busy and don’t get many direct rewards from posting. How do we fix that? What are rewards relevant authors would find meaningful?
Here are some possibilities off the top of my head, with some commentary. My likes are not universal and I hope the comments include people with different utility functions.
Money. Always a classic, rarely disliked although not always prioritized. I’m pretty sure this is why LTFF and EAIF are writing more now.
Appreciation (broad). Some people love these. I definitely prefer getting them over not getting them, but they’re not that motivating for me. Their biggest impact on motivation is probably cushioning the blow of negative comments.
Appreciation (specific). Things like “this led to me getting my iron tested” or “I changed my mind based on X”. I love these, they’re far more impactful than generic appreciation.
High quality criticism that changes my mind.
Arguing with bad commenters.
One of the hardest parts of writing for me is getting a shitty, hostile comment, and feeling like my choices are “let it stand” or “get sucked into a miserable argument that will accomplish nothing”. Commenters arguing with commenters gets me out of this dilemma, which is already great, but then sometimes the commenters display deep understanding of the thing I wrote and that’s maybe my favorite feeling.
Deliberately not included: longer-term rewards like reputation that can translate into jobs, employees, etc. I’m specifically looking for quick rewards for specific posts.
I definitely agree that funding is a significant factor for some institutional actors.
For example, RP’s Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had the capacity / could afford to do so. Our capacity is entirely bottlenecked on funding, and as we are ~entirely reliant on paid commissions (we don’t receive any grants for general support), time spent publishing reports is basically pro bono, adding to our funding deficit.
Examples of this sort of unpublished research include:
The two reports mentioned by CEA here about attitudes towards EA post-FTX among the general public, elites, and students on elite university campuses.
Followup posts about the survey reported here about how many people have heard of EA, to further discuss people’s attitudes towards EA, and where members of the general public hear about EA (this differs systematically)
Updated numbers on the growth of the EA community (2020-2022) extending this method and also looking at numbers of highly engaged longtermists specifically
Several studies we ran to develop reliable measures of how positively inclined towards longtermism people are, looking at different predictors of support for longtermism and how these vary in the population
Reports on differences between neartermists and longtermists within the EA community and on how neartermist / longtermist efforts influence each other (e.g. to what extent does neartermist outreach, like GiveWell, Peter Singer articles about poverty, lead to increased numbers of longtermists)
Whether the age at which one first engaged with EA predicts lower / higher future engagement with EA
A significant dynamic here is that even where we are paid to complete research for particular orgs, we are not funded for the extra time it would take to write up and publish the results for the community. So doing so is usually unaffordable, even where we have staff capacity.
Of course, much of our privately commissioned research is private, such that we couldn’t post it. But there are also significant amounts of research that we would want to conduct independently, so that we could publish it, which we can’t do purely due to lack of funding. This includes:
More message testing research related to EA /longtermism (for an example see Will MacAskill’s comment referencing our work here), including but not limited to:
Testing the effectiveness of specific arguments for these causes
Testing how “longtermist” or “existential risk” or “effective altruist” or “global priorities” framings/brandings compare in terms of how people respond to them (including comparing this to just advocating for specific concrete x-risks without any of these framings)
Testing effectiveness of different approaches to outreach in different populations for AI safety / particular policies
“We want to publish but can’t because the time isn’t paid for” seems like a big loss[1], and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it’s a crisply defined chunk of work with clear outcomes. But I imagine you guys have already put some thought into how to get this paid for.
To be totally honest, I have qualms about the specific projects you mention, they seem centered on social reality not objective reality. But I value a lot of RP’s other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren’t enough to override the general principle.
“We want to publish but can’t because the time isn’t paid for” seems like a big loss, and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it’s a crisply defined chunk of work with clear outcomes.
Thanks! I’m planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:
I think funding projects from multiple smaller donors is just generally more difficult to coordinate than funding from a single source
A lot of people seem to assume that our projects already are fully funded or that they should be centrally funded because they seem very much like core community infrastructure, which reduces inclination to donate
they seem centered on social reality not objective reality. But I value a lot of RP’s other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren’t enough to override the general principle.
I’d be curious to understand this line of thinking better if you have time to elaborate. “Social” vs “objective” doesn’t seem like a natural and action-guiding distinction to me. For example:
Does everyone we want to influence hate EA post-FTX?
Are people more convinced by outreach based on “longtermism”, “existential risk”, principles-based effective altruism, or specific concrete causes?
Do people who first engage with EA when they are younger end up less engaged with EA than those who first engage when they are older?
How fast is EA growing?
all strike me as objective social questions of clear importance. Also, it seems like the key questions around movement building will often be (characterisable as) “social” questions. I could understand concerns about too much meta but too much “social” seems harder to understand.[1]
A possible interpretation I would have some sympathy for is distinguishing between concern with what is persuasive vs what is correct. But I don’t think this raises concerns about these kinds of projects, because:
- A number of these projects are not about increasing persuasiveness at all (e.g. how fast is EA growing? Where are people encountering EA ideas?). Even findings like “does everyone on elite campuses hate EA?” are relevant for reasons other than simply increasing persuasiveness, e.g. decisions about whether we should increase or decrease spending on outreach at the top of the funnel.
- Even if you have a strong aversion to optimising for persuasiveness (you want to just present the facts and let people respond how they will), you may well still want to know if people are totally misunderstanding your arguments as you present them (which seems exceptionally common in cases like AI risk).
- And, of course, I think many people reasonably think that if you care about impact, you should care about whether your arguments are persuasive (while still limiting yourself to arguments which are accurate, sincerely held etc.).
- The overall EA portfolio seems to assign a very small portion of its resources to this sort of research as it stands (despite dedicating a reasonably large amount of time to a priori speculation about these questions (1)(2)(3)(4)(5)(6)(7)(8)) so some more empirical investigation of them seems warranted.
Yeah, “objective” wasn’t a great word choice there. I went back and forth between “objective”, “object”, and “object-level”, and probably made the wrong call. I agree there is an objective answer to “what percentage of people think positively of malaria nets?” but view it as importantly different than “what is the impact of nets on the spread of malaria?”
I agree the right amount of social meta-investigation is >0. I’m currently uncomfortable with the amount EA thinks about itself and its presentation; but even if that’s true, professionalizing the investigation may be an improvement. My qualms here don’t rise to the level where I would voice them in the normal course of events, but they seemed important to state when I was otherwise pretty explicitly endorsing the potential posts.
I can say a little more on what in particular made me uncomfortable. I wouldn’t be writing these if you hadn’t asked and if I hadn’t just called for money for the project of writing them up, and if I were, I’d be aiming for a much higher quality bar. I view saying these at this quality level as a little risky, but worth it because this conversation feels really productive and I do think these concerns about EA overall are important, even though I don’t think they’re your fault in particular:
several of these questions feel like they don’t cut reality at the joints, and would render important facets invisible. These were quick summaries so it’s not fair to judge them, but I feel this way about a lot of EA survey work where I do have details.
several of your questions revolve around growth; I think EA’s emphasis on growth has been toxic and needs a complete overhaul before EA is allowed to gather data again.
I especially think CEA’s emphasis on Highly Engaged people is a warped frame that causes a lot of invisible damage. My reasoning is pretty similar to Theo’s here.
I don’t believe EA knows what to do with the people it recruits, and should stop worrying about recruiting until that problem is resolved.
Asking “do people introduced to EA younger stick around longer?” has an implicit frame that longer is better, and is missing follow-ups like “is it good for them? what’s the counterfactual for the world?”
I think we need to be a bit careful with this, as I saw many highly upvoted posts that in my opinion have been actively harmful. Some very clear examples:
Theses on Sleep, claiming that sleep is not that important. I know at least one person that tried to sleep 6 hours/day for a few weeks after reading this, with predictable results
In general, I think we should promote more posts like “Veg*ns should take B12 supplements, according to nearly-unanimous expert consensus” while not promoting posts like “Veg*nism entails health tradeoffs”, when there is no scientific evidence of this and expert consensus is to the contrary. (I understand that your intention was not to claim that a vegan diet is worse than an average non-vegan diet, but that’s how most readers I’ve spoken to updated in response to your posts.)
I would be very excited about encouraging posts that broadcast knowledge where there is expert consensus that is widely neglected (e.g. Veg*ns should take B12 supplements), but I think it can also be very easy to overvalue hard-to-measure benefits, and we should keep in mind that the vast majority of posts get forgotten after a few days.
I think you are incorrectly conflating being mistaken and being “actively harmful” (what does actively mean here?). I think most things that are well-written and contain interesting true information or perspectives are helpful, your examples included.
Truth-seeking is a long game that is mostly about people exploring ideas, not about people trying to minimize false beliefs at each individual moment.
I think you are incorrectly conflating being mistaken and being “actively harmful”
That’s a fair point, I listed posts that were clearly not only mistaken but also harmful, to highlight that the cost-benefit analysis of “good posts” as a category is very non-obvious.
(what does actively mean here?)
I shouldn’t have used the term “actively”, I edited the comment.
I think most things that are well-written and contain interesting true information or perspectives are helpful, your examples included.
I fear that there’s a very real risk of building castles in the sky, where interesting true information gets mixed with interesting not-so-true information and woven into a misleading narrative that causes bad consequences. I think this happens often, and we should be mindful of it.
I should have explicitly mentioned it, but I mostly agree with Elizabeth’s quick take. I just want to highlight that while some “good posts” “generate a lot of positive externalities”, many other “good posts” are wrong and harmful (and many many more get forgotten after a few days). I’m also probably more skeptical of hard-to-measure diffuse benefits without a clear theory of change or observable measures and feedback loops.
That palette is not just great in the abstract, it’s great as a representation of LW. I did some very interesting anthropology with some non-rationalist friends explaining the meaning and significance of the weirder reacts.
A lot of what I explained was how specific reacts relate to one of the biggest pain points on LW (and EAF): shitty comments. The reacts are weirdly powerful, in part because it’s not the comments’ existence that’s so bad, it’s knowing that other people might read them and not understand they are shitty. I could explain why in a comment of my own, but that invites more shitty comments and draws attention to the original one. It’s only worth it if many people are seeing and believing the comment.
Emojis neatly resolve this. If several people mark a comment as soldier mindset, I feel off the hook for arguing with it. And if several people (especially people I respect) mark a comment as insightful or changing their mind, that suggests that at a minimum it’s worth the time to engage with the comment, and quite possibly I am in the wrong.
You might say I should develop a thicker skin so shitty comments bug me less, and that is probably true on the margin, but I think it’s begging the question. Emojis give me valuable information about how a comment is received; positive emojis suggest I am wrong that it is shitty, or at least wrong about how obvious that is. It is good to respond differently to good comments, obviously shitty comments, and controversial comments, and detailed reacts make that much easier. So I think this was a huge win for LessWrong.
Meanwhile on the EAForum…
[ETA 2023-11-10: turns out I picked a feel-good thread with a special react palette to get my screen shots. I still think my point holds overall but regret the accidental exaggeration. I should have been more surprised when I went to get a screen shot and the palette wasn’t what I expected]
This palette has 5 emojis (clapping, party, heart, star, and surprise) covering maybe 2.5 emotions if you’re generous and count heart as care and not just love. It is considerably less precise than Facebook’s palette. I suspect the limited palette is an attempt to keep things positive, but given that negative comments are (correctly) allowed, this only limits the ability to cheaply push back.
I’ll bet EAF put a lot of thought into their palette. This isn’t even their first palette, I found out the palette had changed when I went to get a screen shot for this post. I would love to hear more about why they chose these 5.
This is not quite as bad as the feel-good palette (I’m so sorry!), but I still think it leaves a tremendous amount of value on the table. It gives no way to give specific negative feedback to bad comments, like “too combative” or “misunderstands position?”. It’s not even particularly good at compliments, except for “Changed my mind”.
How common do you think “shitty comments” are? And how well/poorly do you think the existing karma system provides an observer with knowledge that the user base “understand[s] they are shitty”? (To be sure, it doesn’t tell you if the voting users understand exactly why the comment is shitty.)
I’m not sure how many people would post attributed-to-them emojis if they weren’t already anonymously downvoting a comment for being shitty. So if they aren’t already getting significant downvotes, I don’t know how many negative emojis they would get here.
They’re especially useful for comments of mixed quality, e.g. someone is right and making an important point, but too aggressively. Or a comment is effortful, well-written, and correct within its frame, but fundamentally misunderstands your position. Or, god forbid, someone makes a good point and a terrible point in the same comment. I was originally skeptical of line-level reacts but ended up really valuing them because of cases like these.
There are also reacts like “elaborate”, “taboo this word” and “example” that invite a commenter to correct problems, at which point the comment may become really valuable. Unfortunately there are no notifications for reacts, so this can easily go unnoticed, but it at least raises the option.
If I rephrase your question as “how often do I see comments for which reacts convey something important I couldn’t say with karma?”: most of my posts since reacts came out have been controversial, so I’m using many comment reacts per post (not always dismissively).
I also find positive emojis much more rewarding than karma, especially Changed My Mind.
I like the LW emoji palette, but it is too much. Reading forum posts and parsing through comments can be mentally taxing. I don’t want to spend additional effort going through a list of forty-something emojis and buttons to react to something, especially comments. I am often pressed for time, so almost always I would avoid the LW emoji palette entirely. Maybe a few other important reactions can be added instead of all of them? Or maybe there could be a setting which allows people to choose if they want to see a “condensed” or “extended” emoji palette? Either way, just my two cents.
I agree EAF shouldn’t have a LW-sized palette, much less LW’s specific palette. I want EAF to have a palette that reflects its culture as well as LW’s palette reflects its culture. And I think that’s going to take more than 4 reacts (note that my original comment mortifyingly used a special palette made for a single post, the new version has the normal EAF reacts of helpful, insightful, changed my mind, and heart), but way less than is in the LW palette.
I do think part of LessWrong’s culture is preferring to have too many options rather than making do with the wrong one. I know the team has worked really hard to keep reacts to a manageable level, while making most of them very precise, while covering a wide swath of how people want to react. I think they’ve done an admirable job (full disclosure: I’m technically on the mod team and give opinions in slack, but that’s basically the limit of my power). This is something I really appreciate about LW, but I know shrinks its audience.
Hi! I think we might have a bug — I’m not sure where you’re seeing those emojis on the Forum. For me, here are the emojis that show up:
@Agnes Stenlund might be able to say more about how we chose those,[1] but I do think we went for this set as a way to create a low-friction way of sharing non-anonymous positive feedback (which authors and commenters have told us they lack; some have told us that they feel awkward just commenting with something non-substantive but positive like “thanks!”) while also keeping the UX understandable and easy to use. I think it’s quite possible that it would be better to also add some negative/critical emojis, but I’m not very convinced right now, and not convinced that it’s super promising relative to the other stuff we’re working on or something we should dive deeper into. It won’t be my call in the end, regardless, but I’m definitely up for hearing arguments about why this is wrong!
Not a bug—it’s from Where are you donating this year, and why? which is grandfathered into an old experimental voting system (and it’s the only post with this voting system—there are a couple of others with different experimental systems).
I’m so sorry- I should have been more surprised when I went to get a screenshot and it wasn’t the palette I expected. I have comments set to notify me only once per day, so I didn’t get alerted to the issue until now.
I wrote this with the standard palette so I still think there is a problem, but I feel terrible for exaggerating it with a palette that was perfectly appropriate for its thread.
I’ll bet EAF put a lot of thought into their palette.
As Ollie mentioned, I made the set you referenced for just this one thread. As far as I remember it was meant to support positive vibes in that thread and was done very quickly, so I would not say a lot of thought went into that palette.
A repost from the discussion on NDAs and Wave (a software company). Wave was recently publicly revealed to have made severance dependent on non-disparagement agreements, cloaked by non-disclosure agreements. I had previously worked at Wave, but negotiated away the non-disclosure agreement (but not the non-disparagement agreement).
But my guess is that most of the people you sent to Wave were capable of understanding what they were signing and thinking through the implications of what they were agreeing to, even if they didn’t actually have the conscientiousness / wisdom / quick-thinking to do so. (Except, apparently, Elizabeth. Bravo, @Elizabeth!)
I appreciate the kudos here, but feel like I should give more context.
I think some of what led me to renegotiate was a stubborn streak and righteousness about truth. I mostly hear when those traits annoy people, so it’s really nice to have them recognized in a good light here. But that righteous streak was greatly enabled by the fact that my mom is a lawyer who modeled reading legal documents before signing (even when it’s embarrassing your kids who just want to join their friends at the rock-climbing birthday party), and that I could afford to forgo severance. Obviously I really wanted the money, and I couldn’t afford to take this kind of stand every week. But I believe there were people who couldn’t even afford to add a few extra days, and so almost had to cave.
To the extent people in that second group were unvirtuous, I think the lack of virtue occurred when they didn’t create enough financial slack to even have the time to negotiate. By the time they were laid off without a cushion it was too late. And that’s not available to everyone- Wave paid well, but emergencies happen, any one of them could have a really good reason their emergency fund was empty.
So the main thing I want to pitch here is that “getting yourself into a position where virtue is cheap” is an underrated strategy.
This is one benefit to paying people well, and a reason having fewer better-paid workers is sometimes better than more people earning less money. If your grants or salary give you just enough to live as long as the grants are immediately renewed/you don’t get fired, even a chance of irritating your source of income imperils your ability to feed yourself. 6 months expenses in savings gives you the ability to risk an individual job/grant. Skills valued outside EA give you the ability to risk pissing off all of EA and still be fine.
I’m emphasizing risk here because I think it’s the bigger issue. If you know something is wrong, you’ll usually figure out a way to act on it. The bigger problem is when you have some concerns that legitimately could be nothing, but you worry that investigating will imperil your livelihood.
I sometimes argue against certain EA payment norms because they feel extractive, or cause recipients to incur untracked costs. E.g. “it’s not fair to have a system that requires unpaid work, or going months between work in ways that can’t be planned around and aren’t paid for”. This was the basis for some of what I said here. But I’m not sure this is always bad, or that the alternatives are better. Some considerations:
if it’s okay for people to donate money I can’t think of a principled reason it’s not okay for them to donate time → unpaid work is not a priori bad.
If it would be okay for people to solve the problem of gaps in grants by funding bridge grants, it can’t be categorically disallowed to self-fund the time between grants.
If partial self-funding is required to do independent, grant-funded work, then only people who can afford that will do such work. To the extent the people who can’t would have done irreplaceably good work, that’s a loss, and it should be measured. And to the extent some people would personally enjoy doing such work but can’t, that’s sad for them. But the former is an empirical question weighed against the benefits of underpaying, and the latter is not relevant to impact.
I think the costs of blocking people who can’t self-fund from this kind of work are probably high, especially the part where it categorically prevents segments of society with useful information from participating. But this is much more relevant for e.g. global development than AI risk.
A norm against any unpaid work would mean no one could do anything unless they got funder approval ahead of time, which would be terrible.
A related problem is when people need to do free work (broadly defined, e.g. blogging counts) to get a foot in the door for paid work. This has a lot of the same downsides as requiring self-funding, but, man, seems pretty stupid to insist on ignoring the information available from free sources, and if you don’t ban it there will be pressure to do free work.
To me, “creating your own projects, which people use to inform their opinions of you” feels pretty different from “you must do 50 hours of task X unpaid before we consider a paying position”, but there are ambiguous cases.
it’s pretty common for salaried EAs to do unpaid work on top of their normal job. This feels importantly different to me from grant-funded people funding their own bridge loans, because of the job security and predictability. The issue isn’t just “what’s your take home pay per hour?”, it’s “how much ability to plan do you have?”
Any money you spend on one independent can’t be spent on someone else. To the extent EA is financially constrained, that’s a big cost.
It feels really important to me that costs of independence, like self-bridge-funding or the headache of grant applications, get counted in some meaningful sense, the same as donating money or accepting a low salary.
I feel like a lot of castle discourse missed the point.
By default, OpenPhil/Dustin/Owen/EV don’t need anyone’s permission for how they spend their money.
And it is their money, AFAICT open phil doesn’t take small donations. I assume Dustin can advocate for himself here.
One might argue that the castle has such high negative externalities it can be criticized on that front. I haven’t seen anything to convince me of that, but it’s a possibility and “right to spend one’s own money” doesn’t override that.
You could argue OpenPhil etc made some sort of promise they are violating by buying the castle. I don’t think that’s true- but I also think the castle-complainers have a legitimate grievance.
I do think the word “open” conveys something of a promise, and I will up my sympathy for open phil if they change their name. But my understanding is they are more open than most foundations.
My guess is that lots of people entered EA with inaccurate expectations, and the volume at which this happens indicates a systemic problem, probably with recruiting. They felt ~promised that EA wasn’t the kind of place where people bought fancy castles, or would at least publicly announce they’d bought a retreat center and justify it with numbers.
Highly legible, highly transparent parts of EA exist, and I’m glad they do. But it’s not all of EA, and I don’t think it should be. I think it’s important to hold people to commitments, and open phil at one point did have a commitment to transparency, but they publicly renounced it years ago so that’s no longer in play. I think the problem lies with the people who set the false expectations, which I imagine happened in recruiting.
It’s hard for me to be more specific than this because I haven’t followed EA recruiting very closely, so what reaches me tends to be complaints about the worst parts. My guess is this lies in the more outward facing parts of Effective Ventures (GWWC, 80k, CEA’s university recruiting program, perhaps the formalization of EA groups in general).
[I couldn’t quickly verify this but my understanding is open phil provides a lot of the funding for at least some of these orgs, in which case it does bear some responsibility for the misleading recruiting]
I would like to see recruiting get more accurate about what to expect within EA. I want that partially because honesty is generally good, partially because this seems like a miserable experience for people who have been misled. And partially because I want EA to be a weird do-ocracy, and recruiting lots of people who object to doing weird things without permission slows that down.
I think the first point here—that the buyers “don’t need anyone’s permission” to purchase a “castle”—isn’t contested here. Other than maybe the ConcernedEA crowd, is anyone claiming that they were somehow required to (e.g.) put this to a vote?
I think the “right to spend one’s own money” in no way undermines other people’s “right to speak one’s own speech” by lambasting that expenditure. In the same way, my right to free speech doesn’t prevent other people from criticizing me for it, or even deciding not to fund/hire me if I were to apply for funding or a job. There are circumstances in which we have—or should have—special norms against negative reactions by third parties; for instance, no one should be retaliated against for reporting fraud, waste, abuse, harassment, etc. But the default rule is that what the critics have said here is fair game.
A feeling of EA having breached a “~promise[]” isn’t the only basis for standing here. Suppose a non-EA megadonor had given a $15MM presumably tax-deductible donation to a non-EA charity for buying a “castle.” Certainly both EAs and non-EAs would have the right to criticize that decision, especially because the tax-favored nature of the donation meant that millions’ worth of taxes were avoided by the donation. If one wishes to avoid most public scrutiny, one should make it clear that the donation was not tax-advantaged. In that case, it’s the same as the megadonor buying a “castle” for themselves.
Moreover, I think the level of negative externalities required to give third-party EAs standing to criticize is quite low. The “right to speak one’s own speech” is at least as fundamental as the proposed “right to spend one’s own money.” If the norm is going to be that third parties shouldn’t criticize—much less take adverse actions against—an EA entity unless the negative PR & other side effects of the entity’s action exceed those of the “castle” purchase, then that would seem a pretty fundamental shift in how things work. Because the magnitude of most entities’ actions—especially individuals’—is generally an order of magnitude (or more) less than the magnitude of OP and EVF’s actions, the negative externalities will almost never meet this standard.
I 100% agree with you that people should be and are free to give their opinions, full stop.
Many specific things people said only make sense to me if they have some internal sense that they are owed a justification and input (example, example, example, example).
I almost-but-don’t-totally reject PR arguments. EA was founded on “do the thing that works not the thing that looks good”. EAs encourage many other things people find equally distasteful or even abhorrent, because they believe it does the most good. So “the castle is bad PR” is not a good enough argument, you need to make a case for “the castle is bad PR and meaningfully worse than these other things that are bad PR but still good”. I believe things in that category exist, and people are welcome to make arguments that the castle is one of them, but you do have to make the full argument.
I think you’re slightly missing the point of the ‘castle’ critics here.
By default, OpenPhil/Dustin/Owen/EV don’t need anyone’s permission for how they spend their money. And it is their money, AFAICT open phil doesn’t take small donations. I assume Dustin can advocate for himself here.
One might argue that the castle has such high negative externalities it can be criticized on that front. I haven’t seen anything to convince me of that, but it’s a possibility and “right to spend one’s own money” doesn’t override that.
Technically this is obviously true. And it was the main point behind one of the most popular responses to FTX and all the following drama. But I think that point and the post misses people’s concerns completely and comes off as quite tone-deaf.
To pick an (absolutely contrived) example, let’s say OpenPhil suddenly says it now believes that vegan diets are more moral and healthier than all other diets, and that B12 supplementation increases x-risk, and they’re going to funnel billions of dollars into this venture to persuade people to go Vegan and to drone-strike any factories producing B12. You’d probably be shocked and think that this was a terrible decision and that it had no place in EA.
OpenPhil saying “it’s our money, we can do what we want” wouldn’t hold much water for you, and the same thing I think goes for the Wytham Abbey critics—who I think do have a strong initial normative point that £15m could counterfactually do a lot of good with the Against Malaria Foundation, or Helen Keller International.
Like it’s not just a concern about ‘high negative externalities’: many people saw this purchase, along with the lack of a convincing explanation (to them), and think that this is just a negative-EV purchase in itself, and also negative once you add the externalities—and then there was little explanation forthcoming to change their mind.
I think OpenPhil maybe did this thinking it was a minor part of their general portfolio, without realising the immense power, both explicit and implicit, they have over the EA community, its internal dynamics, and its external perception. They may not officially be in charge of EA, but by all accounts it unofficially works something like that (along with EVF), and I think that should at least figure into their decision-making somewhere.
My guess is that lots of people entered EA with inaccurate expectations, and the volume at which this happens indicates a systemic problem, probably with recruiting. They felt ~promised that EA wasn’t the kind of place where people bought fancy castles, or would at least publicly announce they’d bought a retreat center and justify it with numbers.
open phil at one point did have a commitment to transparency, but they publicly renounced it years ago so that’s no longer in play.
Is the retreat from transparency true? Are there some references you could provide for this? I also feel like there’s a bit of a ‘take-it-or-leave-it’ implicit belief/attitude from OpenPhil here, if true, which I think is unfortunate and, honestly, counterproductive.
I would like to see recruiting get more accurate about what to expect within EA, but I’m not sure what that would look like. I mean, I still think that EA “not being the kind of place where people buy fancy castles” is a reasonable thing to expect and want from EA overall? So I’m not sure that I disagree that people are entering with this kind of expectation, but I’m confused about why you think it’s inaccurate? Maybe it’s descriptively inaccurate, but I’m a lot less sure that it’s normatively inaccurate?
Bombing B12 factories has negative externalities and is well covered by that clause. You could make it something less inflammatory, like funding anti-B12 pamphlets, and there would still be an obvious argument that this was harmful. Open Phil might disagree, and I wouldn’t have any way to compel them, but I would view the criticism as having standing due to the negative externalities. I welcome arguments the retreat center has negative externalities, but haven’t seen any that I’ve found convincing.
who I think do have a strong initial normative point that £15m counterfactually could do a lot of good with the Against Malaria Foundation, or Helen Keller International.
My understanding is:
Open Phil deliberately doesn’t fill the full funding gap of poverty and health-focused charities.
While they have set a burn rate and are currently constrained by it, that burn rate was chosen to preserve money for future opportunities they think will be more valuable. If they really wanted to do both AMF and the castle, they absolutely could.
Given that, I think the castle is a red herring. If people want to be angry about open phil not filling the full funding gaps when it is able I think you can make a case for that, but the castle is irrelevant in the face of its many-billion dollar endowment.
Is the retreat from transparency true? Are there some references you could provide for this?
Even assuming OP was already at its self-imposed cap for AMF and HKI, it could have asked GiveWell for a one-off recommendation. The practice of not wanting to fill 100% of a funding gap doesn’t mean the money couldn’t have been used profitably elsewhere in a similar organization.
are you sure GW has charities that meet their bar that they aren’t funding as much as they want to? I’m pretty sure that used to not be the case, although maybe it has changed. There’s also value to GW behaving predictably, and not wildly varying how much money it gives to particular orgs from year to year.
This might be begging the question, if the bar is raised due to anticipated under funding. But I’m pretty sure at one point they just didn’t have anywhere they wanted to give more money to, and I don’t know if that has changed.
2023: “We expect to find more outstanding giving opportunities than we can fully fund unless our community of supporters substantially increases its giving.”
Giving Season 2022: “We’ve set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled.”
July 2022: “we don’t expect to have enough funding to support all the cost-effective opportunities we find.” Reports rolling over some money from 2021, but much less than originally believed.
Giving Season 2021: GiveWell expects to roll over $110MM, but also believes it will find very-high-impact opportunities for those funds in the next year or two.
Giving Season 2020: No suggestion that GW will run out of good opportunities—“If other donors fully meet the highest-priority needs we see today before Open Philanthropy makes its January grants, we’ll ask Open Philanthropy to donate to priorities further down our list. It won’t give less funding overall—it’ll just fund the next-highest-priority needs.”
Thanks for the response Elizabeth, and the link as well, I appreciate it.
On the B12 bombing example, it was deliberately provocative to show that, in extremis, there are limits to how convincing one would find the justification “the community doesn’t own its donor’s money” as a defence for a donation/grant.
On the negative externality point, maybe I didn't make my point that clear. I think a lot of critics are not just concerned about the externalities, but about the donation itself, especially the opportunity cost of the purchase. I think perhaps you simply disagree with castle critics on the object level of 'was it a good donation or not'.
I take the point about Open Phil’s funding gap perhaps being the more fundamental/important issue. This might be another case of decontextualising vs contextualising norms leading to difficult community discussions. It’s a good point and I might spend some time investigating that more.
I still think, in terms of expectations, the new EA joiners have a point. There’s a big prima facie tension between the drowning child thought experiment and the Wytham Abbey purchase. I’d be interested to hear what you think a more realistic ‘recruiting pitch’ to EA would look like, but don’t feel the need to spell that out if you don’t want.
I think a retreat center is a justifiable idea, I don’t have enough information to know if Wytham in particular was any good, and… I was going to say “I trust open phil” here, but that’s not quite right, I think open phil makes many bad calls. I think a world where open phil gets to trust its own judgement on decisions with this level of negative externality is better than one where it doesn’t.
I understand other people are concerned about the donation itself, not just the externalities. I am arguing that they are not entitled to have open phil make decisions they like, and the way some of them talk about Wytham only makes sense to me if they feel entitlement around this. They’re of course free to voice their disagreement, but I wish we had clarity on what they were entitled to.
I’d be interested to hear what you think a more realistic ‘recruiting pitch’ to EA would look like, but don’t feel the need to spell that out if you don’t want.
This is the million dollar question. I don’t feel like I have an answer, but I can at least give some thoughts.
I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it’s no hardship for me to say we should remove that from recruiting.
Back in 2015 three different EA books came out- Singer’s The Most Good You Can Do, MacAskill’s Doing Good Better, and Nick Cooney’s How To Be Great At Doing Good. My recollection is that Cooney was the only one who really attempted to transmit epistemic taste and a drive to think things through. MacAskill’s book felt like he had all the answers and was giving the reader instructions, and Singer’s had the same issues. I wish EA recruiting looked more like Cooney’s book and less like MacAskill’s.
That's a weird sentence to write, because there is a high volume of vague negative statements about Nick Cooney. No one is very specific, but he shows up in a lot of animal activism #metoo-type articles. So I want to be really clear this preference is for that book alone, and it's been 8 years since I read it.
I think the emphasis on doing The Most Possible Good (* and nothing else counts) makes people miserable and less effective. It creates a mix of decision paralysis and excess deference, and pushes people into projects too ambitious for them to learn from, much less succeed at.
I’m interested in what Charity Entrepreneurship thinks we should do. They consistently incubate the kind of small, gritty projects I think make up the substrate of a healthy ecosystem. TBH I don’t think any of their cause areas are as impactful as x-risk, but succeeding at them is better than failing to influence x-risk, and they’re skill-building while they do it. I feel like CE gets that real work takes time, and I’d like to see that attitude spread.
@Caleb Parikh has talked about how he grades people coming from “good” EA groups more harshly, because they’re more likely to have been socially pressured into “correct” views. That seems like a pretty bad state of affairs.
I think my EA group (Seattle, 2014) handled this fantastically; there was a lot of arguing with each other and with EA doctrine. I'd love to see more things look like that. But that group was made up heavily of adult rationalists with programming jobs, not college students.
Addendum: I just checked out Wytham’s website, and discovered they list six staff. Even if those people aren’t all full-time, several of them supervise teams of contractors. This greatly ups the amount of value the castle would need to provide to be worth the cost. AFAIK they’re not overstaffed relative to other venues, but you need higher utilization to break even.
Additionally, the founder (Owen Cotton-Barratt) has stepped back for reasons that seem merited (a history of sexual harassment), but a nice aspect of having someone important and busy in charge was that he had a lot less to lose if it was shut down. The castle seems more likely to be self-perpetuating when the decisions are made by people with fewer outside options.
I still view this as fundamentally open phil’s problem to deal with, but it seemed good to give an update.
“I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it’s no hardship for me to say we should remove that from recruiting. ”—I’m interested in why you think this?
It puts you in a high SNS activation state, which is inimical to the kind of nuanced math good EA requires
As Minh says, it’s based in avoidance of shame and guilt, which also make people worse at nuanced math.
The full parable is “drowning child in a shallow pond”, and the shallow pond smuggles in a bunch of assumptions that aren’t true for global health and poverty. Such as
“we know what to do”, “we know how to implement it”, and “the downside is known and finite”, which just don’t hold for global health and poverty work. Even if you believe sure-fire interventions exist and somehow haven’t been fully funded, the average person’s ability to recognize them is dismal, and many options make things actively worse. The urgency of drowningchildgottasavethemnow makes people worse at distinguishing good charities from bad. The more accurate analogy would be “drowning child in a fast-moving river when you don’t know how to swim”.
I think Peter Singer believes this so he’s not being inconsistent, I just think he’s wrong.
“you can fix this with a single action, after which you are done.” Solving poverty for even a single child is a marathon.
“you are the only person who can solve this”. I think there is something good about getting people to feel ownership over the problem and avoiding the bystander effect, but falsely invoking an analogy to a situation where that’s true is not the way to do it.
A single drowning child can be fixed via emergency action. A thousand drowning children scattered across my block, replenishing every day, requires a systemic fix. Maybe a fence, or draining the land. And again, the fight or flight mode suitable for saving a single child in a shallow pond is completely inappropriate for figuring out and implementing the systemic solution.
EA is much more about saying “sorry, actively drowning children, I can do more good by putting up this fence and preventing future deaths”.
When Singer first made the analogy clothes were much more expensive than they are now, and when I see the argument being made it’s typically towards people who care very little about clothes. What was “you’d make a substantial sacrifice if a child’s life was on the line” has become “you aren’t so petty as to care about your $30 fast fashion shoes, right?”. Just switching the analogy to “ruining your cell phone” would get more of the original intent.
Do people still care about drowning child analogy? Is it still used in recruiting? I’d feel kind of dumb railing against a point no one actually believed in.
I will say I also never use the Drowning Child argument. For several reasons:
I generally don’t think negative emotions like shame and guilt are a good first impression/initial reason to join EA. People tend to distance themselves from sources of guilt. It’s fine to mention the drowning child argument maybe 10-20 minutes in, but I prefer to lead with positive associations.
I prefer to minimise use of thought experiments/hypotheticals in intros, and prefer to use examples relatable to the other person. IMO, thought experiments make the ethical stakes seem too trivial and distant.
What I often do is figure out what cause areas the other person might relate to based on what they already care about, and describe EA as fundamentally “doing good, better”, in the sense of getting people to engage more thoughtfully with values they already hold.
Just a quick comment that I strong upvoted this post because of the point about violated expectations in EA recruitment, and disagree voted because it’s missing some important points of why EAs should be concerned about how OP and other EA orgs spend their EA money.
I feel similarly to Jason and JWS. I don’t disagree with any of the literal statements you made but I think the frame is really off. Perhaps OP benefits from this frame, but I probably disagree with that too.
Another frame: OP has huge amounts of soft and hard power over the EA community. In some ways, it is the de facto head of the EA community. Is this justified? How effective is it? How do they react to requests for information about questionable grants that have predictably negative impacts on the wider EA community? What steps do they take to guard against motivated reasoning when doing things that look like stereotypical examples of motivated reasoning? There are many people who have a stake in these questions.
Thanks, that is interesting and feels like it has conversational hooks I haven’t heard before.
What would it mean to say Open Phil was justified or not justified in being the de facto head of the community? I assume you mean morally justified, since it seems pretty logical on a practical level.
Supposing a large enough contingent of EA decided it was not justified; what then? I don’t think anyone is turning down funding for the hell of it, so giving up open phil money would require a major restructuring. What does that look like? Who drives it? What constitutes large enough?
Example comment about how much some EAs defer to OP even when they know it’s bad reasoning.
OP’s epistemics are seen as the best in EA and jobs there are the most desirable.
The recent thread about OP allocating most of its neartermist budget to FAW (and especially its comments) shows much reduced deference, or at least more willingness to openly take such positions, among some EAs.
As more critical attention is turned towards OP among EAs, I expect deference will reduce further. E.g. some of David Thorstad’s critical writings have been cited on this forum.
I expect this will continue happening organically, particularly in response to failures and scandals, and the castle played a role in reduced deference.
Hard power
I agree no one is turning down money willy-nilly, but if we ignore labels, how much OP money and effort actually goes into governance and health for the EA community, rather than recruitment for longtermist jobs?
In other words, I’m not convinced it would require restructuring rather than just structuring.
A couple of EAs I spoke to about reforms both talked about how huge sums of money are needed to restructure the community and it’s effectively impossible without a megadonor. I didn’t understand where they were coming from. Building and managing a community doesn’t take big sums of money and EA is much richer than most movements and groups.
Why can’t EAs set up a fee-paying society? People could pay annual membership fees and in exchange be part of a body that provided advice for donations, news about popular cause areas and the EA community, a forum, annual meetings, etc. Leadership positions could be decided by elections. I’m just spitballing here.
Of course this depends on what one’s vision for the EA community is.
Why can’t EAs set up a fee-paying society? People could pay annual membership fees and in exchange be part of a body that provided advice for donations, news about popular cause areas and the EA community, a forum, annual meetings, etc. Leadership positions could be decided by elections. I’m just spitballing here.
The math suggests that the meta would look much different in this world. CEA’s proposed budget for 2024 is $31.4MM by itself, about half for events (mostly EAG), about a quarter for groups. There are of course other parts of the meta. There were 3567 respondents to the EA Survey 2022, which could be an overcount or undercount of the number of people who might join a fee-paying society. Only about 60% were full-time employed or self-employed; most of the remainder were students.
Maybe a leaner, more democratic meta would be a good thing—I don’t have a firm opinion on that.
To make sure I understand; this is an answer to “what should EA do if it decides OpenPhil’s power isn’t justified?” And the answer is “defer less, and build a grassroots community structure?”
I’m not sure what distinction you’re pointing at with structure vs. restructure. They both take money that would have to come from somewhere (although we can debate how much money). Maybe you mean OP wouldn’t actively oppose this effort?
To the first: Yup, it’s one answer. I’m interested to hear other ideas too.
Structure vs restructuring: My point was that a lot of the existing community infrastructure OP funds is mislabelled and is closer to a deep recruitment funnel for longtermist jobs rather than infrastructure for the EA community in general. So for the EA community to move away from OP infrastructure wouldn’t require relinquishing as much infrastructure as the labels might suggest.
For example, and this speaks to @Jason’s comment, the Center for Effective Altruism is primarily funded by the OP longtermist team to (as far as I can tell) expand and protect the longtermist ecosystem. It acts and prioritizes accordingly. It is closer to a longtermist talent recruitment agency than a center for effective altruism. EA Globals (impact often measured in connections) are closer to longtermist job fairs than a global meeting of effective altruists. CEA groups prioritize recruiting people who might apply for and get OP longtermist funding (“highly engaged EAs”).
I think we have a lot of agreement in what we want. I want more community infrastructure to exist, recruiting to be labeled as recruiting, and more people figuring out what they think is right rather than deferring to authorities.
I don’t think any of these need to wait on proving open phil’s power is unjustified. People can just want to do them, and then do them. The cloud of deference might make that harder[1], but I don’t think arguing about the castle from a position of entitlement makes things better. I think it’s more likely to make things worse.
Acting as if every EA has standing to direct open phil’s money reifies two things I’d rather see weakened. First it reinforces open phil’s power, and promotes deference to it (because arguing with someone implies their approval is necessary). But worse, it reinforces the idea that the deciding body is the EA cloud, and not particular people making their own decisions to do particular things[2]. If open phil doesn’t get to make its own choices without community ratification, who does?
I remember reading a post about a graveyard of projects CEA had sniped from other people and then abandoned. I can’t find that post and it’s a serious accusation so I don’t want to make it without evidence, but if it is true, I consider it an extremely serious problem and betrayal of trust.
“Narrow” is meant to be neutral to positive here. No event can be everything to all people; I think it’s great they made an explicit decision on trade-offs. They maybe could have marketed it more accurately. They’re moving that way now and I wish it had gone farther earlier. But I think even perfectly accurate marketing would have left a lot of people unhappy.
Maybe some people argued from a position of entitlement. I skimmed the comments you linked above and I did not see any entitlement. Perhaps you could point out more specifically what you felt was entitled, although a few comments arguing from entitlement would only move me a little so this may not be worth pursuing.
The bigger disagreement I suspect is between what we think the point of EA and the EA community is. You wrote that you want it to be a weird do-ocracy. Would you like to expand on that?
Maybe you two might consider having this discussion using the new Dialogue feature? I’ve really appreciated both of your perspectives and insights on this discussion, and I think the collaborative back-and-forth you’re having seems a very good fit for how Dialogues work.
So in this hypothetical, certain functions transfer to the fee-paying society, and certain functions remain funded by OP. That makes sense, although I think the range of what the fee-paying society can do on fees alone may be relatively small. If we estimate 2,140 full fee-payers at $200 each and 1,428 students at $50 each, that’s south of $500K. You’d need a diverse group of EtGers willing to put up $5K-$25K each for this to work, I suspect. I’m not opposed; in fact, my first main post on the Forum was in part about the need for the community to secure independent funding for certain epistemically critical functions. I just want to see people who advocate for a fee-paying society to bite the bullet of how much revenue fees could generate and what functions could be sustained on that revenue. It sounds like you are willing to do so.
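A quick arithmetic check of that revenue estimate (a sketch; the member counts and fee levels are just the assumptions stated above):

```python
# Quick check of the fee-paying society revenue estimate above
# (member counts and fee levels are the assumptions from the comment).
full_members, full_fee = 2_140, 200   # full fee-payers at $200/year
students, student_fee = 1_428, 50     # students at $50/year

revenue = full_members * full_fee + students * student_fee
print(f"${revenue:,}")  # $499,400 -- just south of $500K, as stated
```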
But looping back to your main point about “huge amounts of soft and hard power over the EA community” held by OP, how much would change in this hypothetical? OP still funds the bulk of EA, still pays for the “recruitment funnel,” pays the community builders, and sponsors the conferences. I don’t think characterizing the bulk of what CEA et al. do as a “recruitment funnel” for the longtermist ecosystem renders those functions less important as sources of hard and soft power. OP would still be spending ~ $20-$30MM on meta versus perhaps ~ $1-2MM for the fee-paying society.
OP and most current EA community work takes a “Narrow EA” approach. The theory of change is that OP and EA leaders have neglected ideas and need to recruit elites to enact these ideas. Buying castles and funding expensive recruitment funnels is consistent with this strategy.
I am talking about something closer to a big tent EA approach. One vision could be to help small and medium donors in rich countries spend more money more effectively on philanthropy, with a distinctive emphasis on cause neutrality and cause prioritization. This can and probably should be started in a grassroots fashion with little money. Spending millions on fancy conferences and paying undergraduate community builders might be counter to the spirit and goals of this approach.
A fee-paying society is a natural fit for big tent EA and not for narrow EA.
I didn’t know that the huge amounts of power held by OP was my main point! I was trying to use that to explain why EA community members were so invested in the castle. I’m not sure I succeeded, especially since I agree with @Elizabeth’s points that no one needs to wait for permission from OP or anyone else to pursue what they think is right, and the EA community cannot direct OP’s donations.
I personally would love to see a big-tent organization like the one you describe! I think it less-than-likely that the existence of such an organization would have made most of the people who were “so invested in the castle” significantly less so. But there’s no way to test that. I agree that a big-tent organization would bring in other people—not currently involved in EA—who would be unlikely to care much about the castle.
“Castles”, plural. The purchase of Wytham Abbey gets all the attention, but everyone ignores that during that same time there was also the purchase of a chateau in Hostačov using FTX funding.
I think an underappreciated part of castlegate is that it fairly easily puts people in an impossible bind.
EA is a complicated morass, but there are a few tenets that are prominent, especially early on. These may be further simplified, especially in people using EA as treatment for their scrupulosity issues. For most of this post I’m going to take that simplified point of view (I’ll mark when we return to my own beliefs).
Two major, major tenets brought up very early in EA are:
You should donate your money to the most impactful possible cause
Some people will additionally internalize “The most impactful in expectation”
GiveWell and OpenPhil have very good judgment.
The natural conclusion of which is that donating to GiveWell- or OpenPhil-certified causes is a safe and easy way to fulfill your moral duty.
If you’re operating under those assumptions and OpenPhil funds something without making their reasoning legible, there are two possibilities:
The opportunity is bad, which at best means OpenPhil is bad, and at worst means the EA ecosystem is trying to fleece you.
The opportunity is good but you’re not allowed to donate to it, which leaves you in violation of tenet #1.
Both of which are upsetting, and neither of which really got addressed by the discourse.
I don’t think these tenets are correct, or at least they aren’t complete. I think goodharting on a simplified “most possible impact” metric leads very bad places. And I think that OpenPhil isn’t even trying to have “good judgment” in the sense that tenet #2 means it. Even if they weren’t composed of fallible humans, they’re executing a hits-based strategy that means you shouldn’t expect every opportunity to be immediately, legibly good. That’s one reason they don’t ask for money from small donors. Which means OpenPhil funding things that aren’t legibly good doesn’t put me in any sort of bind.
I think it would be harmful to force all of EA to fit the constraints imposed by these two tenets. But I think enough people are under the impression it should that it rises to a level of problem worth addressing, probably through better messaging.
because it’s not legible, and willingness to donate to illegible things opens you up to scams.
OpenPhil also discourages small donations, I believe specifically because they don’t want to have to justify their decisions to the public, but I think they will accept them.
Saying you’re not allowed to donate to the projects is much stronger than either of these things though. E.g. re your 2nd point, nothing is stopping someone from giving top up funding to projects/people that have received OpenPhil funding, and I’m not sure anyone feels like they’re being told they shouldn’t? E.g. the Nonlinear Fund was doing exactly this kind of marginal funding.
I agree they’re allowed to seek out frontier donations, or for that matter give to Open Phil. I believe that this doesn’t feel available/acceptable, on an emotional level, to a meaningful portion of the EA population, who have a strong need for both impact and certainty.
Salaries at direct work orgs are a frequent topic of discussion, but I’ve never seen those conversations make much progress. People tend to talk past each other- they’re reading words differently (“reasonable”), or have different implicit assumptions that change the interpretation. I think the questions below could resolve a lot of the confusion (although not all of it, and not the underlying question. Highlighting different assumptions doesn’t tell you who’s right, it just lets you focus discussions on the actual disagreements).
Here’s my guess for the important questions. Some of them are contingent- e.g. you might think new grad generalists and experienced domain experts should be paid very differently. Feel free to give as many sets of answers as you want, just be clear which answers lump together, so no one misreads your expert salary as if it was for interns.
What kind of position are you thinking about?
Experienced vs. new grad
Domain expertise vs generalist?
Many outside options vs. few?
Founder vs employee?
What salary are you thinking about?
What living conditions do you expect this salary to buy?
Housing?
Location?
Kids?
Food?
Savings rate
What is your bar for “enough” money? Not keeling over dead? Peak productivity but miserable? Luxury international travel 2x/year?
What percentage of people can reach that state with your suggested salary?
Some things that might make someone’s existence more expensive:
health issues (physical and mental)
Kids
Introversion
Ailing parents
Distant family necessitating travel.
Burnout requiring unemployed period.
What do you expect to happen for people who can’t thrive in those conditions?
If you lost your top choice due to insufficient salary, how good do you expect the replacement to be?
What is your counterfactual for the money saved on salary?
People often cite EA salaries as higher than other non-profits, but my understanding is that most non-profits pay pretty badly. Not “badly” as in “low”, but “badly” as in “they expect credentials, hours, and class signals that are literally unaffordable on the salary they pay. The only good employees who stick around for >5 years have their bills paid by a rich spouse or parent.”
So I don’t think that argument in particular holds much water.
n = 1, but my wife has worked in non-EA non-profits her whole career, and this is pretty much true. It’s mostly women earning poorly at the non-profit, while their husbands make bank at big corporates.
Where does this idea come from, Elizabeth? From my experience (n=10) this argument is incorrect. I know a bunch of people who work in these “badly” paying jobs you talk of who defy your criteria: they don’t have their bills paid for by a rich parent; instead they are content with their work and accept a form of “salary sacrifice” mindset, even if they wouldn’t phrase it in those EA terms.
EA doesn’t have a monopoly on altruism; there are plenty of folks out there living simply and working for altruistic causes they believe in, even though it doesn’t pay well and they could be earning way more elsewhere, outside of conventional market forces.
The sense I get reading this is that you feel I’ve insulted your friends, who have made a big sacrifice to do impactful work. That wasn’t my intention and I’m sorry it came across that way. From my perspective, I am respecting the work people do by suggesting they be paid decently.
First, let me take my own advice and specify what I mean by decently: I think people should be able to have kids, have a sub-30-minute commute, live in conditions they don’t find painful (people only live with housemates if they like it, housing isn’t physically dangerous, there’s outdoor space if they need that to feel good; any of these may come at a trade-off with the others, and probably no one gets all of them, but you shouldn’t be starting from a position where it’s impossible to get reasonable needs met), save for retirement, have cheap vacations, have reasonably priced hobbies, pay their student loans, and maintain their health (meaning both things like healthcare, and things like good food and exercise). If they want to own their home, they shouldn’t be too many years behind their peers in being able to do so.
I think it is both disrespectful to the workers and harmful to the work to say that people don’t deserve these things, or should be willing to sacrifice it for the greater good. Why on earth put the pressure on them to accept less[1], and not on high-earners to give more? This goes double for orgs that require elite degrees or designer clothes: if you want those class signals, pay for them.
Hey Elizabeth, just to clarify: I don’t think you’ve insulted my friends at all, don’t worry about that. I just disagreed, from my experience at least, that that was the situation with most NGO workers, as you claimed. I get that you are trying to respect people by pushing for them to be paid more; it’s all good.
As a small note, I don’t think they have made a “big sacrifice” at all, most wouldn’t say they have made any sacrifice at all. They have traded earning money (which might mean less to them than for other people anyway) for a satisfying job while living a (relatively) simple lifestyle which they believe is healthy for themselves and the planet. Personally I don’t consider this a sacrifice either, just living your best life!
I’m going to leave it here for now (not in a bad way at all), because I suspect our underlying worldviews differ to such a degree that it may be hard to debate these surface salary and lifestyle issues without first probing at deeper underlying assumptions about happiness, equality, “deserving”, etc., which would take a deeper and longer discussion that might be tricky in a forum back-and-forth.
Not saying I’m not up for discussing these things in general though!
I tested a version of these here, and it worked well. A low-salary advocate revealed a crux they hadn’t stated before (there is little gap between EA orgs’ first- and later-choice candidates), and people with relevant data shared it (the gap may be a 50% drop in quality, or not filling the position at all).
This is an interesting model—but what level of analysis do you think is best for answering question 7? One could imagine answering this question on:
the vacancy level at the time of hire decision (I think Bob would be 80% as impactful as the frontrunner, Alice)
the vacancy level at the time of posting (I predict that on average the runner-up candidate will be 80% as impactful as the best candidate would be at this org at this point in time)
the position level (similar, but based on all postings for similar positions, not just this particular vacancy at this point in time)
the occupational field level (e.g., programmer positions in general)
the organizational level (based on all positions at ABC Org; this seems to be implied when an org sets salaries mainly by org-wide algorithm)
the movement-wide level (all EA positions)
the sector-wide level (which could be “all nonprofits,” “all tech-related firms,” etc.)
the economy-wide level.
I can see upsides and downsides to using most of these to set salary. One potential downside is, I think, common to analyses conducted at a less-than-organizational level.
Let’s assume for illustrative purposes that 50% of people should reach the state specified in question 4 with $100K, and that the amount needed is normally distributed with a standard deviation of $20K due to factors described in step five and other factors that make candidates need less money. (The amount needed likely isn’t normally distributed, but one must make sacrifices for a toy model.) Suppose that candidates who cannot reach the question-4 state on the offered salary will decline the position, while candidates who can will accept. (Again, a questionable but simplifying assumption.)
One can calculate, in this simplified model, the percentage of employees who could achieve the state at a specific salary. One can also compute the amount of expected “excess” salary paid (i.e., the amounts that were more than necessary for employees to achieve the desired state).
If the answer to question 7 is that losing the top candidate would have a severe impact, one might choose a salary level at which almost all candidates could achieve the question-four state—say, +2.5 SD (i.e., $150K) or even +3 SD ($160K). But this comes at a cost: the employer has likely paid quite a bit of “excess” salary (on average, roughly $50K of the $150K salary will be “excess”).
On the other hand, if there are a number of candidates of almost equivalent quality, it might be rational to set the salary offer at $100K, or even at −0.5 SD ($90K), accepting that the organization will lose a good percent of the candidates as a result.
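A minimal sketch of this toy model in Python (scipy for the normal distribution; the $100K mean, $20K SD, and accept-only-if-covered rule are the assumptions stated above, and the truncated-normal mean is my way of computing the conditional average need):

```python
# Minimal sketch of the toy salary model above: needs ~ Normal($100K, $20K),
# and a candidate accepts iff the offer covers their need. For a given offer,
# compute the acceptance rate and the average "excess" paid per accepted hire.
from scipy.stats import norm

MU, SIGMA = 100_000, 20_000  # assumed mean and SD of the salary a candidate needs

def acceptance_and_excess(offer):
    z = (offer - MU) / SIGMA
    accept_rate = norm.cdf(z)  # P(need <= offer)
    # mean need among those who accept (normal distribution truncated above at the offer)
    mean_need_given_accept = MU - SIGMA * norm.pdf(z) / norm.cdf(z)
    return accept_rate, offer - mean_need_given_accept

for offer in (90_000, 100_000, 120_000, 150_000):
    rate, excess = acceptance_and_excess(offer)
    print(f"offer ${offer:,}: {rate:.0%} accept, avg excess ${excess:,.0f}")
```

This reproduces the numbers in the comment: a $150K offer is accepted by ~99% of candidates with roughly $50K average excess, a $120K offer loses ~16% of candidates, and a $90K offer is accepted by only ~31%.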
I suspect you would then have a morale problem with certain employees running the numbers and concluding that they were seen as considerably more replaceable than others who were assigned the same level!
You can fix that by answering question 7 at the organizational or movement levels, averaging the answers for all positions. Suppose that analysis led to the conclusion that your org should offer salaries at this position grade level based on +1 SD ($120K). But you’re still running a 16% risk that the top candidate for the position with no good alternative will decline, while you’re not getting much ROI for the “excess” money spent for certain other positions. You could also just offer $150K to everyone at that level, but that’s harder to justify in the new world of greater funding constraints.
In sum, the mode of analysis that I infer from your questions seems like it would be very helpful when looking at a one-off salary setting exercise, but I’m unsure how well it would scale.
Ambition snowballs/Get ambitious slowly works very well for me, but some people seem to hate it. My first reaction is that these people need to learn to trust themselves more, but today I noticed a reason I might be unusually suited for this method.
Two things that keep me from aiming at bigger goals are laziness and fear. Primarily fear of failure, but also of doing uncomfortable things. I can overcome this on the margin by pushing myself (or having someone else push me), but that takes energy, and the amount of energy never goes down the whole time I’m working. It’s like holding a magnet away from its twin; you can do it, but the minute you stop the system will snap back into place.
But more than I am lazy and fearful, I am easily bored, and hate boredom even more than I hate work or failure. If I hang around my comfort zone long enough I get bored of it and naturally start exploring outside. And that expansion doesn’t take energy; in fact it takes energy to keep me in at that point.
My mom used a really simple example of this on my brother when he was homeschooled (6th grade). He’d had some fairly traumatic experiences in English class and was proving resistant to all her teaching methods. Finally she sat him down in front of a computer and told him he had to type continuously for X minutes. It could be literally anything he wanted, including “I can’t think of anything to write about”; he just had to keep his fingers typing the entire time (he could already touch type at this point, mom bribed us with video games until we got to 60 WPM). I don’t remember exactly how long this took to work (I think it took her a while to realize she had to ban copy/paste), but the moment she did, my brother got so bored of typing the same thing that he typed new things, and then education could slip in.
So I’m not worried about being stuck, because I will definitely gnaw my own leg off just to feel something if that happens. And it’s unclear if I can speed up the process by pushing myself outside faster, because leaving comfort zone ring n too early delays getting bored of it (although done judiciously it might speed up the boredom).
I’ll be at EAGxVirtual this weekend. My primary goal is to talk about my work on epistemics and truthseeking within EA, and especially get the kind of feedback that doesn’t happen in public. If you’re interested, you can find me on the usual channels.
I’m pretty sure you can’t have consequentialist arguments for deceptions of allies or self, because consequentialism relies on accurate data. If you’ve blinded yourself then you can have the best utility function in the world and it will do you no good because you’re applying it to gibberish.
GET AMBITIOUS SLOWLY
This post is very popular on Twitter https://x.com/eaheadlines/status/1690624321117388800?s=46&t=7jI2LUFFCdoHtZr1AtWyCA
Feeling a bit tired to type a more detailed response, but I think I mostly agree with what you say here.
I’m a bit confused about this because “getting ambitious slowly” seems like one of those things where you might not be able to successfully fool yourself: once you can conceive that your true goal is to cure cancer, you are already “ambitious”; unless you’re really good at fooling yourself, you will immediately view smaller goals as instrumental to the big one. It doesn’t work to say I’m going to get ambitious slowly.
What does work is focusing on achievable goals though! Like, I can say I want to cure cancer but then decide to focus on understanding metabolic pathways of the cell, or whatever. I think if you are saying that you need to focus on smaller stuff, then I am 100% in agreement.
Does what I said here and here answer this? The goal isn’t “put the brakes on internally motivated ambition”, it’s “if you want to get unambitious people to do bigger projects, you will achieve your goal faster if you start them with a snowball rather than try to skip them straight to Very Big Plans”.
I separately think we should be clearer on the distinction between goals (things you are actively working on, have a plan with concrete next steps and feedback loops for, and could learn from failing at) and dreams (things you vaguely aspire to and maybe are working in the vicinity of, but have no concrete plans for). Dreams are good, but the proper handling of them is pretty different from that of goals.
I also liked this quote from Obama on a similar theme. The advice is pretty common, for very good reasons, but hearing it from a former POTUS had more emotional force for me:
”how do we sustain our own sense of hope, drive, vision, and motivation? And how do we dream big? For me, at least, it was not a straight line. It wasn’t a steady progression. It was an evolution that took place over time as I tried to align what I believed most deeply with what I saw around me and with my own actions.
(...)
The first stage is just figuring out what you really believe. What’s really important to you, not what you pretend is important to you. And what are you willing to risk or sacrifice for it? The next phase is then you test that against the world, and the world kicks you in the teeth. It says, “You may think that this is important, but we’ve got other ideas. And who are you? You can’t change anything.”
Then you go through a phase of trying to develop skills, courage, and resilience. You try to fit your actions to the scale of whatever influence you have. I came to Chicago and I’m working on the South Side, trying to get a park cleaned up or trying to get a school improved. Sometimes I’m succeeding, a lot of times I’m failing. But over time, you start getting a little bit of confidence with some small victories. That then gives you the power to analyze and say, “Here’s what worked, here’s what didn’t. Here’s what I need more of in order to achieve the vision or the goals that I have.” Now, let me try to take it to the next level, which means then some more failure and some more frustration because you’re trying to expand the orbit of your impact.
I think it’s that iterative process. It’s not that you come up with a grand theory of “here’s how I’m going to change the world” and then suddenly it all just goes according to clockwork. At least not for me. For me, it was much more about trying to be the person I wanted to believe I was. And at each phase, challenging myself and testing myself against the world to see if, in fact, I could have an impact and make a difference. Over time, you’ll surprise yourself, and it turns out that you can.”
The problem with this advice is that many people in EA don’t think we have enough time to slowly build up. If you think AI might take control of the future within the next 15 years, you don’t have much time to build skills in the first half of your career and exercise power after you have 30 years of experience. There is an extreme sense of urgency, and I am not sure what’s the right response.
“we don’t have time” is only an argument for big gambles if they work. If ambition snowballs work better, then a lack of time is all the more reason not to waste time with vanity projects whose failures won’t even be educational.
I could steel man this as something of a lottery, where n% of people with way-too-big goals succeed and those successes are more valuable than the combined cost of the failures. I don’t think we’re in that world, because I think goals in the category I describe aren’t actually goals, they’re dreams, and by and large can’t succeed.
You could argue that’s defining myself into correctness, and that some big goals are genuinely goals even if they pattern-match my criteria like “failure is uninformative” and “contemplating a smaller project is scary, or their mind glances off the option (as opposed to rejecting it for being too small)”. I think that’s very unlikely to be true for my exact criteria, but agree that in general overly broad definitions of fake ambition could do a lot of damage. I think creating a better definition people can use to evaluate their own goals/dreams is useful for that exact reason.
I also think that even if there are a few winning tickets in that lottery (people pushed into way-too-big projects that succeed), there aren’t enough of them to make a complete problem-solving ecosystem. The winning tickets still need staff officers to do the work they don’t have time for, or that requires skills inimical to swinging for the fences.
I should note that my target audience here is primarily “people attempting to engender ambition in others”, followed by “the people who are subject to those attempts”. I think engendering fake ambition is actively harmful, and the counterfactual isn’t “30 years in a suit”, it’s engendering ambition snowballs that lead to more real projects. I don’t think discouraging people who are naturally driven to do much-too-big projects is helpful.
I’d also speculate that if you tell a natural fence-swinger to start an ambition snowball, they end up at mind-bogglingly ambitious quickly, not necessarily slower than if you’d pushed them directly to dream big. Advice like “Do something that’s scary but at least 80% tractable” scales pretty well across natural ambition levels.
This is fantastic, and mirrors the method that has helped things work well in my own life.
Agreed.
I think that people should break down their goals, no matter how easy they seem, into easier and smaller steps, especially if they feel lazy. Laziness appears when we feel like we need to do tasks that seem unnecessary for us, even when we know that they’re necessary. One reason they appear unnecessary is the difficulty of achieving them. Why exercise for 30 minutes per day if things are “fine” without that? As such, one way to deal with that is to take whatever goal you have and break it down into a lot of easy steps. As an example, imagine that you want to write the theoretical part of your thesis. You could start by writing down what the topic is, what questions you might want to research, what key uncertainties you have about those questions, then search for papers to clarify those uncertainties, and so on, immediate step by step, until you finish your thesis. If a step seems difficult, break it down even more. That’s why I think breaking down your goals into smaller and easier steps might help when you feel lazy.
Anyways, thanks for your quick take!
EA organizations frequently ask for people to run criticism by them ahead of time. I’ve been wary of the push for this norm. My big concerns were that orgs wouldn’t comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data.
I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn’t quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.
Of those 14, 10 had replied by the start of the next day, and more than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it’s still impressive.
It’s hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard at fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I’d realized ahead of time that I was going to cut an example.
Only 80,000 Hours, Anima International, and GiveDirectly failed to respond before publication (7 days after I emailed them). Of those, only 80k’s mention was negative.
I didn’t keep as close track of changes, but at a minimum replies led to 2 examples being removed entirely, 2 clarifications and some additional information that made the post better. So overall I’m very glad I solicited comments, and found the process easier than expected.
Nice, thanks for keeping track of this and reporting on the data!! <3
No pressure to respond, but I’m curious how long it took you to find the relevant email addresses, send the messages, then reply to all the people etc.? I imagine for me, the main costs would probably be in the added overhead (time + psychological) of having to keep track of so many conversations.
Off the top of my head: in maybe half the cases I already had the contact info. In one or two cases one of my beta readers passed on the info. For the remainder it was maybe <2m per org, and it turns out they all use info@domain.org, so it would be faster next time.
There’s a thing in EA where encouraging someone to apply for a job or grant gets coded as “supportive”, maybe even a very tiny gift. But that’s only true when [chance of getting job/grant] x [value of job/grant over next best alternative] > [cost of applying].
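As a toy illustration of that inequality (every number here is hypothetical, not drawn from the example below):

```python
# Toy illustration of the inequality above; all numbers are hypothetical.
p_get_grant = 0.03               # chance of getting the grant
value_over_alternative = 5_000   # value of the grant over the next best option ($)
cost_of_applying = 400           # e.g. several founder-hours valued at ~$50/hr

is_a_gift = p_get_grant * value_over_alternative > cost_of_applying
print(is_a_gift)  # False: with these numbers, the encouragement isn't actually a gift
```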
One really clear case was when I was encouraged to apply for a grant my project wasn’t a natural fit for, because “it’s quick and there are few applicants”. This seemed safe, since the deadline was in a few hours. But in those few hours the number of applications skyrocketed (I want to say 5x, but my memory is shaky), presumably because I wasn’t the only person the grantmaker encouraged. I ended up wasting several hours of my and my co-founders’ time before dropping out, because the project really was not a good fit for the grant.
[if the grantmaker is reading this and recognizes themselves: I’m not mad at you personally].
I’ve been guilty of this too, defaulting to encouraging people to try for something without considering the costs of making the attempt, or the chance of success. It feels so much nicer than telling someone “yeah you’re probably not good enough”.
A lot of EA job postings encourage people to apply even if they don’t think they’re a good fit. I expect this is done partially because orgs genuinely don’t want to lose great applicants who underestimate themselves, and partially because it’s an extremely cheap way to feel anti-elitist.
I don’t know what the solution is here. Many people are miscalibrated on their value or their competition, and all else being equal you do want to catch those people. But casting a wider net entails more bycatch.
It’s hard to accuse an org of being mean to someone who they encouraged to apply for a job or grant. But I think that should be in the space of possibilities, and we should put more emphasis on invitations to apply for jobs/grants/etc being clear, and less on welcoming. This avoids wasting the time of people who were predictably never going to get the job.
I think this falls into a broader class of behaviors I’d call aspirational inclusiveness.
I do think shifting the relative weight from welcoming to clear is good. But I’d frame it as a “yes and” kind of shift. The encouragement message should be followed up with a dose of hard numbers.
Something I’ve appreciated from a few applications is the hiring manager’s initial guess for how the process will turn out. Something like “Stage 1 has X people and our very tentative guess is future stages will go like this”.
Scenarios can also substitute in areas where numbers may be misleading or hard to obtain. I’ve gotten this from mentors before, like here’s what could happen if your new job goes great. Here’s what could happen if your new job goes badly. Here’s the stuff you can control and here’s the stuff you can’t control.
Something I’ve tried to practice in my advice is giving some ballpark number and reference class. I tell someone they should consider skilling up in a hard area or pursuing a competitive field, then I tell them I expect success for <5% of people I give the advice to, and then say they may still want to do it for certain reasons.
Yes, it’s all very noisy. But numbers seem far, far better than expecting applicants to read between the lines of what a heartwarming message is supposed to mean, especially early-career folks, who would understandably read it as implying a high probability of success.
Oh I like this phrase a lot
Yeah this sounds right.
One thing is just that discouragement is culturally quite hard and there are strong disincentives to do it; e.g. I think I definitely get more flak for telling people they shouldn’t do X than for telling them they should (including a recent incident which was rather personally costly). And I think I’m much more capable of diplomatic language than the median person in such situations; some of my critical or discouraging comments on this forum are popular.
I also know at least 2 different people who were told (probably wrongly) many years ago that they can’t be good researchers, and they still bring it up as recently as this year. Presumably people falsely told they can be good researchers (or correctly told that they cannot) are less likely to e.g. show up at EA Global. So it’s easier for people in positions of relative power or prestige to see the positive consequences of encouragement, and the negative consequences of discouragement, than the reverse.
Sometimes when people ask me about their chances, I try to give them off-the-cuff numerical probabilities. Usually the people I’m talking to appreciate it but sometimes people around them (or around me) get mad at me.
(Tbf, I have never tried scoring these fast guesses, so I have no idea how accurate they are).
How my perspective has changed on this during the last few years is that I now advise others not to give much weight to a single point of feedback. Especially for those who’ve told me only one or two people have discouraged them from be(com)ing a researcher, I tell them not to stop trying in spite of that. That’s even when the person giving the discouraging feedback is in a position of relative power or prestige.
The last year seems to have proven that the power or prestige someone has gained in EA is a poor proxy for how much weight their judgment should be given on any single EA-related topic. If Will MacAskill and many of his closest peers are doubting how they’ve conceived of EA for years in the wake of the FTX collapse, I expect most individual effective altruists confident enough to judge another’s entire career trajectory are themselves likely overconfident.
Another example is AI safety. I’ve talked to dozens of aspiring AI safety researchers who’ve felt very discouraged by an illusory consensus thrust upon them that their work was essentially worthless because it didn’t superficially resemble the work being done by the Machine Intelligence Research Institute, or whatever other approach was in vogue at the time. For years, I suspected that was bullshit.
Some of the brightest effective altruists I’ve met were being inundated by personal criticism harsher than any even Eliezer Yudkowsky would give. I told those depressed, novice AIS researchers to ignore those dozens of jerks who concluded the way to give constructive criticism, like they presumed Eliezer would, was to emulate a sociopath. These people were just playing a game of ‘follow the leader’ that not even the “leaders” would condone. I distrusted their hot takes, based on clout and vibes, about who was competent and who wasn’t.
Meanwhile, over the last year or two, more and more of the AIS field, including some of its most reputed luminaries, have come out of the woodwork to say, essentially, “lol, turns out we didn’t know what we were doing with alignment the whole time, we’re definitely probably all gonna die soon, unless we can convince Sam Altman to hit the off switch at OpenAI.” I feel vindicated in my skepticism of the quality of the judgement of many of our peers.
Thanks for this post, as I’ve been trying to find a high-impact job that’s a good personal fit for 9 months now. I have noticed that EA organizations use what appears to be a cookie-cutter recruitment process with remarkable similarities across organizations and cause areas. This process is also radically different from what non-EA nonprofit organizations use for recruitment. Presumably EA organizations adopted this process because there’s evidence behind its effectiveness but I’d love to see what that evidence actually is. I suspect it privileges younger, (childless?) applicants with time to burn, but I don’t have data to back up this suspicion other than viewing the staff pages of EA orgs.
Can you say more about cookie-cutter recruitment? I don’t have a good sense of what you mean here.
I think solving this is tricky. I want hiring to be efficient, but most ways hiring orgs can get information take time, and that’s always going to be easier for people with more free time. I think EA has an admirable norm of paying for trials and deserves a lot of credit for that.
One possible solution is to have applicants create a prediction market on their chance of getting a job/grant, before applying—this helps grant applicants get a sense of how good their prospects are. (example 1, 2) Of course, there’s a cost to setting up a market and making the relevant info legible to traders, but it should be a lot less than the cost of writing the actual application.
Another solution I’ve been entertaining is to have grantmakers/companies screen applications in rounds, or collaboratively, such that the first phase of application is very very quick (eg “drop in your Linkedin profile and 2 sentences about why you’re a good fit”).
I’d be interested in seeing some organizations try out the very very quick method. Heck, I’d be willing to help set it up and trial run it. My rough/vague perception is that a lot of the information in a job application is superfluous.
I also remember Ben West posting some data about how a variety of “how EA is this person” metrics held very little predictive value in his own hiring rounds.
EA hiring gets a lot of criticism. But I think there are aspects at which it does unusually well.
One thing I like is that hiring and holding jobs feels way more collaborative between boss and employee. I’m much more likely to feel like a hiring manager wants to give me honest information and make the best decision, whether or not that’s with them. Relative to the rest of the world, they’re much less likely to take investigating other options personally.
Work trials and even trial tasks have a high time cost, and are disruptive to people with normal amounts of free time and work constraints (e.g. not having a boss who wants you to trial with other orgs because they personally care about you doing the best thing, whether or not it’s with them). But trials are so much more informative than interviews that I can’t imagine hiring for or accepting a long-term job without one.
Trials are most useful when you have the least information about someone, so I expect removing them to lead to more inner-ring dynamics and less hiring of unconnected people.
EA also has an admirable norm of paying for trials, which no one does for interviews.
The impression I get from comparing the interview paradigm to the work-trial paradigm: so much of today’s civilization is less than 100 years old, and really big transformations happen every decade. The introduction of work trials is one of those transformations.
Two popular responses to FTX are “this is why we need to care more about honesty” and “this is why we need to not do weird/sketchy shit”. I pretty strongly believe the former. I can see why people would believe the latter, but I worry that the value lost is too high.
But I think both sides can agree that representing your weird/sketchy thing as mundane is highly risky. If you’re going to disregard a bunch of the normal safeguards of operating in the world, you need to replace them with something, and most of those somethings are facilitated by honesty.
None of my principled arguments against “only care about big projects” have convinced anyone, but in practice Google reorganized around that exact policy (“don’t start a project unless it could conceivably have 1b+ users, kill if it’s ever not on track to reach that”) and they haven’t grown an interesting thing since.
My guess is the benefits of immediately aiming high are overwhelmed by the costs of less contact with reality.
Can you link to a source about this?
The policy was commonly announced when I worked at Google (2014); I’m sure anyone else who was there at the time would confirm its existence. As for “haven’t grown anything since”: I haven’t kept close track, but I can’t name one and frequently hear people say the same.
I like the Google Pixels. Well specifically I liked 2 and 3a but my current one (6a) is a bit of a disappointment. My house also uses Google Nest and Chromecast regularly. Tensorflow is okay. But yeah, overall certainly nothing as big as Gmail or Google Maps, never mind their core product.
Google was producing the Android OS and its own flagship phones well before the Pixel, so I consider it to predate my knowledge of the policy (although maybe the policy started before I got there, which I’ve now dated to 4/1/2013)
Please send me links to posts with those arguments you’ve made, as I’ve not read them, though my guess would be that you haven’t convinced anyone because some of the greatest successes in EA started out so small. I remember the same kind of skepticism being widely expressed about some projects like that.
Rethink Priorities comes to mind as one major example. The best example is Charity Entrepreneurship. Not only was it one of those projects whose potential scalability was doubted; it keeps incubating successful non-profit EA startups across almost every EA-affiliated cause. CE’s cumulative track record might be the best empirical argument against the broad applicability of your position to the EA movement.
Your comment makes the most sense to me if you misread my post and are responding to exactly the opposite of my position, but maybe I’m the one misreading you.
Upvoted. Thanks for clarifying. The conclusion to your above post was ambiguous to me, though I now understand.
People talk about running critical posts by the criticized person or org ahead of time, and there are a lot of advantages to that. But the plans I’ve seen are all fairly one sided: all upside goes to the criticized, all the extra work goes to the critic.
What I’d like to see is some reciprocal obligation from recipients of criticism, especially formal organizations with many employees. Things like answering questions from potential critics very early in the process, with a certain level of speed and reliability. Right now orgs respond quickly to polished, public criticism, and maybe even to polished posts sent to them before publication, but they are not fast or reliable at answering questions with implicit potential criticism behind them. Which is a pretty shitty deal for the critic, who I’m sure would love to find out their concern was unmerited before spending dozens of hours writing a polished post.
This might be unfair. I’m quite sure it used to be true, but a lot of the orgs have professionalized over the years. In which case I’d like to ask that they make their commitments around this public and explicit, and share them in the same breath that they ask for a heads-up on criticism.
I see a pretty important benefit to the critic, because you’re ensuring that there isn’t some obvious response to your criticisms that you are missing.
I once posted something that revised/criticized an Open Philanthropy model, without running it by anyone there, and it turned out that my conclusions were shifted dramatically by a coding error that was detected immediately in the comments.
That’s a particularly dramatic example that I don’t expect to generalize, but often if a criticism goes “X organization does something bad” the natural question is, why do they do that? Is there a reason that’s obvious in hindsight that they’ve thought about a lot, but I haven’t? Maybe there isn’t, but I would want to run a criticism by them just to see if that’s the case.
I don’t think people are obligated to build in the feedback they get extensively if they don’t think it’s valid/their point still stands.
This seems like a good argument against not asking, but a bad argument against getting people information as early as possible.
I don’t have any disagreement with getting people information early, I just think characterizing the current system as one where only the criticizee benefits is wrong.
A few benefits I see to the critic even in the status quo:
The post generally ends up stronger, because it’s more accurate. Even if you only got something minor wrong, readers will (reasonably!) assume that if you’re not getting your details right then they should pay less attention to your post.
To the extent that the critic wants the public view to end up balanced and isn’t just trying to damage the criticizee, having the org’s response go live at the same time as the criticism helps.
If the critic does get some things wrong despite giving the criticizee the opportunity to review and bring up additional information, either because the criticizee didn’t mention these issues or refused to engage, the community would generally see it as unacceptable for the criticizee to sue the critic for defamation. Whereas if a critic posts damaging false claims without that (and without a good reason for skipping review, like “they abused me and I can’t sanely interact with them”) then I think the law is still on the table.
A norm where orgs need to answer critical questions promptly seems good on its face, but I’m less sure in practice. Many questions take far more effort to answer well than they do to pose, especially if they can’t be answered from memory. Writing a ready-to-go criticism post is a way of demonstrating that you really do care a lot about the answer to this question, which might be needed to keep down the work of answering not-actually-that-important questions? But there could be other ways?
You’re not wrong, but I feel like your response doesn’t make sense in context.
Handled vastly better by being able to reliably get answers about concerns earlier.
Assumes things are on a roughly balanced footing and unanswered criticism pushes it out of balance. If criticism is undersupplied for large orgs, making it harder makes things less balanced (but rushed or bad criticism doesn’t actually fix this, now you just have two bad things happening)
I’m asking the potential criticizee to provide that information earlier in the process.
A friend asked me which projects in EA I thought deserved more money, especially ones that seemed to be held back by insufficient charisma of the founders. After a few names he encouraged me to write it up. This list is very off the cuff and tentative: in most cases I have pretty minimal information on the project, and they’re projects I incidentally encountered on EAF. If you have additions I encourage you to comment with them.
The main list
The bar here is “the theory of change seems valuable, and worse projects are regularly funded”.
Faunalytics
Faunalytics is a data analysis firm focused on metrics related to animal suffering. I searched high and low for health data on vegans that included ex-vegans, and they were the only place I found anything that had any information from ex-vegans. They shared their data freely and offered some help with formatting, although in the end it was too much work to do my own analysis.
I do think their description minimized the problems they found. But they shared enough information that I could figure that out rather than relying on their interpretation, and that’s good enough.
ALLFED
EA is trend-following to unfortunate degrees. ALLFED picked the important but unsexy target of food security during catastrophes, and has been steadfastly pursuing it for 7 years.
Charity Entrepreneurship pilot fund
CE runs a bootcamp/incubator that produces several new charities each run. I don’t think every project that comes out of this program is gold. I don’t even know of any projects that make me go “yes, definitely amazing”. But they are creating founder-talent where none existed before, and getting unsexy projects implemented, and building up their own skill in developing talent.
CE recently opened up the funding circle for their incubated projects.
Exotic Tofu Project, perhaps dependent on getting a more charismatic co-founder
I was excited by George Stiffman’s original announcement of a plan to bring unknown Chinese tofus into the west and market them to foodie omnivores as desirable and fun. This was a theory of change that could work, and that avoided the ideological sirens that crash so many animal suffering projects on the rocks.
Then he released his book. It was titled Broken Cuisine: Chinese tofu, Western cooking, and a hope to save our planet. The blurb opens with “Our meat-based diets are leading to antibiotic-resistant superbugs, runaway climate change, and widespread animal cruelty. Yet, our plant-based alternatives aren’t appealing enough. This is our Broken Cuisine. I believe we must fix it.” This is not a fun and high status leisure activity: this is doom and drudgery, aimed at the already-converted. I think doom and drudgery strategies are oversupplied right now, but I was especially sad to see this from someone I thought understood the power of offering options that were attractive on people’s own terms.
This was supposed to be a list of projects that are underfunded due to lack of charisma; unfortunately this project requires charisma. I would still love to see it succeed, but I think that will require a partner who is good at convincing the general public that something is high-status. My dream is a charming, high-status reducetarian or ameliatarian foodie influencer, but just someone who understood omnivores on their own terms would be an improvement.
Impact Certificates
I love impact certificates as a concept, but they’ve yet to take off. They suffer from both lemon and coordination problems. They’re less convenient than normal grants, so impact certificate markets only get projects that couldn’t get funding elsewhere. And it’s a 2 or 3 sided market (producers, initial purchasers, later purchasers).
There are a few people valiantly struggling to make impact certificates a thing. I think these are worth funding directly, but it’s also valuable to buy impact certificates. If you don’t like the uncertainty of the project applications you can always be a secondary buyer; those are perhaps even rarer.
Projects I know doing impact certificates
Manifund. Manifund is the IC subproject of Manifold Market, which manifestly does not suffer from lack of extroversion in its founder. But impact certs are just an uphill battle, and private conversations with founder Austin Chen indicated they had a lot of room for more funding.
ACX Grants runs an impact certificate program, managed by Manifund.
Oops, the primary project I was thinking of has gone offline.
Ozzie Gooen/QURI
Full disclosure: I know Ozzie socially, although not so well as to put him in the Conflicts of Interest section.
Similar to ALLFED: I don’t know that QURI’s estimation tools are the most important project, but I do know Ozzie has been banging the drums on forecasting for years, way before Austin Chen made it cool, and it’s good for the EA ecosystem to have that kind of persistent pursuit in the mix.
Community Building
Most work done under the name “community building” is recruiting. Recruiting can be a fine thing to do, but it makes me angry to see it mislabeled this way while actual community building starves. Community recruiting is extremely well funded, at least for people willing to frame their project in terms of accepted impact metrics. However, if you do the harder part of actually building and maintaining a community that nourishes members, and are uncomfortable pitching impact when that’s not your focus, money is very scarce. This is a problem because
EA has an extractive streak that can burn people out. Having social support that isn’t dependent on validation from EA authorities is an important counterbalance.
The people who are best at this are the ones doing it for its own sake rather than optimizing for short-term proxies on long-term impact. Requiring fundees to aim at legible impact selects for liars and people worse at the job.
People driven to community build for its own sake are less likely to pursue impact in other ways. Even if you think impact-focus is good and social builders are not the best at impact, giving them building work frees up someone impact-focused to work on something else.
Unfortunately I don’t have anyone specific to donate to, because the best target I know already burnt out and quit. But I encourage you to be on the lookout in your local community. Or be the change I want to see in the world: being An Organizer is hard, but hosting occasional movie nights or proactively cleaning at someone else’s party is pretty easy, and can go a long way towards creating a healthy connected community.
Projects with Conflicts of Interest
The first section featured projects I know only a little about. This section includes projects I know way too much about, to the point I’m at risk of bias.
Independent grant-funded researchers
Full disclosure: I am an independent, sometimes grant-funded, researcher.
This really needs to be its own post, but in a nutshell: relying on grants for your whole income sucks, and often leaves you with gaps or at least a lot of uncertainty. I’m going to use myself as an example because I haven’t run surveys or anything, but I expect I’m on the easier end of things.
The core difficulty with grant financing: grantmakers don’t want to pay too far in advance, and they don’t want to approve new grants until you’ve shown results from the last grant. Results take time. Grant submissions, approval, and payout also take time. This means that, at best, you spend many months not knowing if you’ll have a funding gap, and many times the answer will be yes. I don’t know if this is the grantmakers’ fault, but many people feel pressure to ask for as little money as possible, which makes the gaps a bigger hardship.
I get around this by contracting and treating grants as one client out of several, but I’m lucky that’s an option. It also means I spend time on projects that EA would consider unoptimal. Other problems: I have to self-fund most of my early work because I don’t want to apply for a grant until I have a reasonable idea of what I could hope to accomplish. There are projects I’ve been meaning to do for years, but are too big to self-fund, and too illegible and inspiration-dependent to survive a grant process. I have to commit to a project at application time but then not start until the application is approved, which could be months later.
All-purpose funding with a gentle reapplication cycle would let independents take more risks at a lower psychological toll. Or test out Austin Chen’s idea of ~employment-as-a-service. Alas, neither would help me right this second: illness has put me behind on some existing grant-funded work, so I shouldn’t accept more money right now. But other independents could; if you know of any, please leave a pitch in the comments.
Lightcone
Full disclosure: I volunteer and am very occasionally paid for work at Lightcone, and have deep social ties with the team.
Lightcone’s issue isn’t so much charisma as that the CEO is allergic to accepting money with strings, and the EA-offered money comes with strings. I like Lightcone’s work, and some of my favorite parts of their work would have been much more difficult without that independence.
Very grateful for the kind words, Elizabeth! Manifund is facing a funding shortfall at the moment, and will be looking for donors soon (once we get the ACX Grants Impact Market out the door), so I really appreciate the endorsement here.
(Fun fact: Manifund has never actually raised donations for our own core operations/salary; we’ve been paid ~$75k in commission to run the regrantor program, and otherwise have been just moving money on behalf of others.)
What would fundraising mean here? Is it for staffing, or donations to programs, or to your grantmakers to distribute as they see fit?
i’ve been working at manifund for the last couple months, figured i’d respond where austin hasn’t (yet)
here’s a grant application for the meta charity funders circle that we submitted a few weeks ago, which i think is broadly representative of who we are & what we’re raising for.
tldr of that application:
core ops
staff salaries
misc things (software, etc)
programs like regranting, impact certificates, etc, for us to run how we think is best[1]
additionally, if a funder was particularly interested in a specific funding program, we’re also happy to provide them with infrastructure. e.g. we’re currently facilitating the ACX grants, we’re probably (70%) going to run a prize round for dwarkesh patel, and we’d be excited about building/hosting the infrastructure for similar funding/prize/impact cert/etc programs. this wouldn’t really look like [funding manifund core ops, where the money goes to manifund], but rather [running a funding round on manifund, where the funding mostly[2] goes to object-level projects that aren’t manifund].
i’ll also add that we’re less funding-crunched than when austin first commented; we’ll be running another regranting round, for which we’ll be paid another $75k in commission. this was new info between his comment and this comment. (details of this are very rough/subject to change/not firm.)
i’m keeping this section intentionally vague. what we want is [sufficient funding to be able to run the programs we think are best, iterate & adjust quickly, etc] not [this specific particular program in this specific particular way that we’re tying ourselves down to]. we have experimentation built into our bones, and having strings attached breaks our ability to experiment fast.
we often charge a fee of 5% of the total funding; we’ve been paid $75k in commission to run the $1.5mm regranting round last year.
I probably would have had ALLFED and CE on a list like this had I written it (don’t know as much about most of the other selections). It seems to me that both organizations get, on a relative basis, a whole lot more public praise than they get funding. Does anyone have a good explanation for the praise-funding mismatch?
TL;DR: I think the main reason is the same reason we aren’t donating to them: we think there are even more promising projects in terms of the effectiveness of a marginal $, and we are extremely funding constrained. I strongly agree with Elizabeth that all these projects (and many others) deserve more money.
Keeping in mind that I haven’t researched any of the projects and I’m definitely not an expert in grantmaking, I personally think that “the theory of change seems valuable, and worse projects are regularly funded” is not the right bar to estimate the relative value of a marginal dollar, as it doesn’t take into account funding gaps, costs, and actual results achieved.
As a data point on the perspective of a mostly uninformed effectiveness-oriented small donor, here’s why I personally haven’t donated to these projects in 2023, starting from the 2 you mention.
I’m not writing this because I think they are good reasons to fund other projects, but as a potentially interesting data-point in the psychology of an uninformed giver.
ALLFED:
Their theory of change seems really cool, but research organizations seem very hard to evaluate as a non-expert. I think 3 things all need to go right for research to be impactful:
The research needs to find “surprising”/”new” impactful interventions (or show that existing top interventions are surprisingly less cost-effective)
The research needs to be reliable and generally high quality
The research needs to be influential and decision-relevant for the right actors.
It’s really hard to evaluate each of the three as a non-expert. I would also be surprised if this was particularly neglected, as ALLFED is very famous in EA, and Denkenberger seems to have a good network. I also don’t know what more funding would lead to, and their track record is not clear to me after >6 years (but that is very much my ignorance; and because evaluating research is hard)
Charity Entrepreneurship/Ambitious Impact:
They’re possibly my favourite EA org (which is saying a lot; the bar is very high). I recommended allocating $50k to CE when I won a donor lottery. But because they’re so obviously cost-effective, if they ever have a funding need, I imagine tons of us would be really eager to jump in and help fill it. Including e.g. the EAIF. So, I personally would consider a donation to CE as counterfactually ~similar to a donation to the EAIF.
Regarding CE-incubated projects, I do donate a bit to them, but I personally believe that some of the medium-large donors in the CE seed network are very thoughtful and experienced grantmakers. So, I don’t expect the unfunded projects to be the most promising CE projects. Some projects like Healthier Hens do scale down due to lack of funding after some time, but I think a main reason in that case was that some proposed interventions turned out to not work or cost more than they expected. See their impact estimates.
Faunalytics:
They are super well known and have been funded by OpenPhil and the EA Animal Welfare Fund for specific projects, so I defer to them. While they have been an ACE-recommended charity for 8 years, I don’t know if the marginal dollar has more impact there compared to the other extremely impressive animal orgs.
Exotic Tofu:
It seems really hard to evaluate. Elizabeth mentions some issues, but in general my very uninformed opinion is that if it wouldn’t work as a for-profit, it might be less promising as a non-profit compared to other (exceptional) animal welfare orgs.
Impact Certificates:
I think the first results weren’t promising, and I fear it’s mostly about predicting the judges’ scores since it’s rare to have good metrics and evaluations. That said, Manifund seems cool, and I made a $12 offer for Legal Impact for Chickens to try it out.[1] Since you donate to them and have relevant specific expertise, you might have alpha here and it might be worth checking out
QURI:
My understanding is that most of their focus in the past few years has been building a new programming language. While technically very impressive, I don’t fully understand the value proposition and after four years they don’t seem to have a lot of users. The previous QURI project www.foretold.io didn’t seem to have worked out, which is a small negative update. I’m personally more optimistic about projects like carlo.app and I like that it’s for-profit.
Edit: see the object-level response from Ozzie; the above is somewhat wrong and I expect other points about other orgs to be wrong in similar ways
Community Building:
I’m personally unsure about the value of non-impact-oriented community building. I see a lot of events like “EA Karaoke Night”, which I think are great but:
I’m not sure they’re the most cost-effective way to mitigate burnout
I think there are very big downsides in encouraging people to rely on “EA” for both social and economic support
I worry that “EA” is getting increasingly defined in terms of social ties instead of impact-focus, and that this makes us less impactful and leads us to optimize for the wrong things (hopefully, I’ll write a post soon about this. Basically, I find it suboptimal that someone who doesn’t change their career, donate, or volunteer, but goes to EA social events, is sometimes considered closer to the quintessential “EA” compared to e.g. Bill Gates)
Independent grant-funded researchers:
See ALLFED above for why it’s hard for me to evaluate research projects, but mostly I think this obviously depends a lot on the researcher. But I think the point is about better funding methodology/infrastructure and not just more funding.
Lightcone:
I hear conflicting things about the dynamics there (the point about “the bay area community”). I’m very far from the Bay Area, and I think projects there are really expensive compared to other great projects. I also thought they had less of a funding need nowadays, but again I know very little.
Please don’t update much on the above in your decisions on which projects to fund. I know almost nothing about most of the projects above and I’m probably wrong. I also trust grantmakers and other donors have much more information, experience, and grantmaking skills; and that they have thought much more about each of the orgs mentioned. This is just meant to be an answer to “Does anyone have a good explanation for the praise-funding mismatch?” that basically is a bunch of guessed examples for: “many things can be very praise-worthy without being a great funding opportunity for many donors”
But I really don’t expect to have more information than the AWF on this, and I think they’ll be the judge, so rationally, I should probably just have donated the money to the AWF. I think I’m just not the target audience for this.
Quick notes on your QURI section:
“after four years they don’t seem to have a lot of users” → I think it’s more fair to say this has been about 2 years. If you look at the commit history you can see that there was very little development for the first two years of that time.
https://github.com/quantified-uncertainty/squiggle/graphs/contributors
We’ve spent a lot of time on blog posts / research and other projects, as well as Squiggle Hub. (Though in the last year especially, we’ve focused on Squiggle.)
Regarding users, I’d agree it’s not as many as I would have liked, but think we are having some. If you look through the Squiggle Tag, you’ll see several EA groups who have used Squiggle.
We’ve been working with a few EA organizations on Squiggle setups that are mostly private.
I think for-profits have their space, but I also think that nonprofits and open-source/open organizations have a lot of benefits.
Thank you for the context! Useful example of why it’s not trivial to evaluate projects without looking into the details
Of course! In general I’m happy for people to make quick best-guess evaluations openly—in part, that helps others here correct things when there might be some obvious mistakes. :)
My thoughts were:
For many CE-incubated charities, the obvious counterfactual donation would be to GiveWell top charities, and that’s a really high bar.
I consider the possibility that a lot of ALLFED’s potential value proposition comes from a low probability of saving hundreds of millions to billions of lives in scenarios that would counterfactually neither lead to extinction nor produce major continuing effects thousands of years down the road.
If that is so, it is plausible that this kind of value proposition may not be particularly well suited to many neartermist donors (for whom the chain of contingencies leading to impact may be too speculative for their comfort level) or to many strong longtermist donors (for whom the effects thousands to millions of years down the road may be weaker than for other options seen as mitigating extinction risk more).
If you had a moral parliament of 50 neartermists & 50 longtermists that could fund only one organization (and by a 2⁄3 majority vote), one with this kind of potential impact model might do very well!
I think this is right and important. Possible additional layer: some donors are more comfortable with experimental or hits-based giving than others. Those people disproportionately go into x-risk. The donors remaining in global poverty/health are both more averse to uncertainty and have options to avoid it (both objectively, and vibe-wise).
I really agree with the first point, and the really high bar is the main reason all of these projects have room for more funding.
I somewhat disagree with the second point: my impression is that many donors are interested in mitigating non-existential global catastrophic risks (e.g. natural pandemics, climate change), but I don’t have much data to support this.
I don’t think “many donors are interested in mitigating non-existential global catastrophic risks” is necessarily inconsistent with the potential explanation for why organizations like ALLFED may get substantially more public praise than funding. It’s plausible to me that an org in that position might be unusually good at rating highly on many donors’ charts, without being unusually good at rating at the very top of the donors’ lists:
There’s no real limit on how many orgs one can praise, and preventing non-existential GCRs may win enough points on donors’ scoresheets to receive praise from the two groups I described above (focused neartermists and focused longtermists) in addition to its actual donors.
However, many small/mid-size donors may fund only their very top donation opportunities (e.g., top two, top five, etc.)
Hi Jason,
Here is why I do not recommend donating to ALLFED, for which I work as a contractor. If one wants to:
Minimise existential risk, one had better donate to the best AI safety interventions, namely the Long-Term Future Fund (LTFF).
Maximise nearterm welfare, one had better donate to the best animal welfare interventions.
I estimate corporate campaigns for chicken welfare, like the ones promoted by The Humane League, are 1.37 k times as cost-effective as GiveWell’s top charities.
Maximise nearterm human welfare in a robust way, one had better donate to GiveWell’s funds.
I guess the cost-effectiveness of ALLFED is of the same order of magnitude as that of GiveWell’s funds (relatedly), but it is way less robust (in the sense that my best guess will change more upon further investigation).
CEARCH estimated “the cost-effectiveness of conducting a pilot study of a resilient food source to be 10,000 DALYs per USD 100,000, which is around 14× as cost-effective as giving to a GiveWell top charity”, while cautioning that “the result is highly uncertain. Our probabilistic model suggests a 53% chance that the intervention is less cost-effective than giving to a GiveWell top charity, and an 18% chance that it is at least 10× more cost-effective. The estimated cost-effectiveness is likely to fall if the intervention is subjected to further research, due to optimizer’s curse”. I guess CEARCH is overestimating cost-effectiveness (see my comments). A rough unpacking of these quoted figures is sketched after this list.
Maximise nearterm human welfare supporting interventions related to nuclear risk, one had better donate to Longview’s Nuclear Weapons Policy Fund.
My impression is that efforts to decrease the number of nuclear detonations are more cost-effective than ones to decrease famine deaths caused by nuclear winter. This is partly informed by CEARCH estimating that lobbying for arsenal limitation is 5 k times as cost-effective as GiveWell’s top charities, although I guess the actual cost-effectiveness is more like 0.5 to 50 times that of GiveWell’s top charities.
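The unpacking promised above is just arithmetic on the numbers in the CEARCH quote, not an additional estimate: dividing the pilot-study figure by the claimed 14× multiple roughly recovers the GiveWell top-charity benchmark implied by those numbers.

$$
\frac{10{,}000\ \text{DALYs} / \$100\text{k}}{14} \approx 714\ \text{DALYs} / \$100\text{k}
\quad\Longleftrightarrow\quad
\frac{\$100{,}000}{714\ \text{DALYs}} \approx \$140\ \text{per DALY averted}
$$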
As always (unless otherwise stated), the views expressed here are my own, not those of ALLFED.
Some hypotheses:
I’m wrong, and they are adequately funded
I’m wrong and they’re not outstanding orgs, but discovering that takes work the praisers haven’t done.
The praise is a way to virtue signal, but people don’t actually put their money behind it.
The praise is truly meant and people put their money behind it, but none of the praise is from the people with real money.
I believe CE has received OpenPhil money, and ALLFED has received CEA and SFF money, just not as much as they wanted. Maybe the difference is not in the number of grants approved, but in how much room for funding big funders believe they have or want to fill.
I’m not sure of CE’s funding situation, it was the incubated orgs that they pitched as high-need.
Maybe the OpenPhil AI and meta teams are more comfortable fully funding something than other teams.
ALLFED also gets academic grants; maybe funders fear their money will replace those rather than stack on top of them.
OpenPhil has a particular grant cycle, maybe it doesn’t work for some orgs (at least not as their sole support).
I found this list very helpful, thank you!
On exotic tofu: I am not yet convinced that Stiffman doesn’t have the requisite charisma. Is your concern that he’s vegan (hence less relatable to non-vegans), his messaging in Broken Cuisine specifically, or something else? I am sympathetic to the first concern, but not as convinced by the second. In particular, from what little else I’ve read from Stiffman, his messaging is more like his original post on this Forum: positive and minimally doom-y. See, for example, his article in Asterisk, this podcast episode (on what appears to be a decently popular podcast?), and his newsletter.
Have you reached out to him directly about your concerns about his messaging? Your comments seem very plausible to me and reaching out seems to have a high upside.
I sent a message to George Stiffman through a mutual friend and never heard back, so I gave up after 2 pings (to the friend).
Thanks for mentioning places Stiffman comes across better. I’ve read the Asterisk article and found it irrelevant to his consumer-aimed work. Maybe the Bittman podcast is consumer-targeted and an improvement, I dunno. For now I can’t get over that book title and blurb.
Can you elaborate on what you mean by “the EA-offered money comes with strings?”
Not well. I only have snippets of information, and it’s private (Habryka did sign off on that description).
I don’t know if this specifically has come up in regards to Lightcone or Lighthaven, but I know Habryka has been steadfastly opposed to the kind of slow, cautious, legally-defensive actions coming out of EVF. I expect he would reject funding that demanded that approach (and if he accepted it, I’d be disappointed in him, given his public statements).
Thanks for putting the Exotic Tofu Project on my screen! I also like all the others.
We (me and my cofounder) run yet another “impact certificates” project. We started out with straightforward impact certificates, but the legal hurdles for us and for the certificate issuers turned out too high and possibly (for us) insurmountable, at least in the US.
We instead turned to the system that works for carbon credits. These are not so much traded on the level of the certificate or impact claim but instead there are validators that confirm that the impact has happened according to certain standards and then pay out the impact credits (or carbon credits) associated with that standard.
That system seems more promising to us as it has all the advantages of impact certificate markets, plus the advantage that one party (e.g., us) can fight the legal battle in the US once for this impact credit (and can even rely on the precedent of carbon credits), and thereby pave the ground for all the other market participants that come after, who don’t have to worry about the legalities anymore. There are already a number of non-EA organizations working toward a similar vision.
Even outside such restrictive jurisdictions as the US, this system has the advantage that it allows for deeper liquidity on the impact credit markets (compared to the auctions for individual impact certificates). But the US is an important market for EA and AI safety, so we couldn’t just ignore it even if it hadn’t been for this added benefit.
We’ve started bootstrapping this system with GiveWiki in January of last year. But over the course of the year we’ve found it very hard to find anyone who wanted to use the system as a donor/grantmaker. Most of the grantmakers we were in touch with had lost their funding in Nov. 2022; others wanted to wait until the system is mature; and many smaller donors had no trouble finding great funding gaps without our help.
We will keep the platform running, but we’ll probably have to wait for the next phase of funding overhang, when there are more grantmakers and they actually have trouble finding their funding gaps.
(H/t to Dony for linking this thread to me!)
GiveWiki just looks like a list of charities to me; what’s the additional thing you are doing?
Frankie made a nice explainer video for that!
What a market does, idealizing egregiously, is that people with special knowledge or insight invest into things early: Thus less informed people (some of whom have more capital) can watch the valuations, and invest into projects with high and increasing valuations or some other valuation-based marker of quality. A process of price discovery.
AngelList, for example, facilitates that. They have a no-action letter from the SEC (and the startups on AngelList have at least a Regulation D filing, I imagine), so they didn’t have to register as a broker-dealer to be allowed to match startups to investors. I think they have some funds that are led by seasoned investors, and then the newbie investors can follow the seasoned ones by investing into their funds. Or some mechanism of that sort.
We’re probably not getting a no-action letter, and we don’t have the money yet to start the legal process to get our impact credits registered with the CFTC. So instead we recognized that in the above example investors are treating valuations basically like scores. So we’re just using scores for now. (Some rich people say money is just for keeping score. We’re not rich, so we use scores directly.)
The big advantage of actual scores (rather than using monetary valuations like scores) is that it’s legally easy. The disadvantage is that we can’t pitch GiveWiki to profit-oriented investors.
So unlike AngelList, we’re not giving profit-oriented investors the ability to follow more knowledgeable profit-oriented investors, but we’re allowing donors/grantmakers to follow more knowledgeable donors/grantmakers. (One day, with the blessing of the CFTC, we can hopefully lift that limitation.)
We usually frame this as a process of three phases:
Implement the equivalent of price discovery with a score. (The current state of GiveWiki.)
Pay out a play money currency according to the score.
Turn the play money currency into a real impact credit that can be sold for dollars (with the blessing of the CFTC).
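To make the “scores instead of valuations” idea concrete, here is a purely illustrative sketch of track-record-weighted score aggregation. This is not our actual scoring logic, and the names and weights are made up; it just shows the shape of “less-informed donors following the scores of donors with better track records.”

```python
# Illustrative sketch only: not GiveWiki's real scoring algorithm.
# Each donor rates a project; donors with stronger track records get
# more weight, so newer donors can "follow" seasoned evaluators.

from dataclasses import dataclass

@dataclass
class Evaluation:
    donor: str
    score: float          # this donor's score for the project (e.g. 0-10)
    track_record: float   # weight reflecting the donor's past accuracy

def aggregate_score(evals: list[Evaluation]) -> float:
    """Track-record-weighted average of donor scores for one project."""
    total_weight = sum(e.track_record for e in evals)
    if total_weight == 0:
        return 0.0
    return sum(e.score * e.track_record for e in evals) / total_weight

if __name__ == "__main__":
    project_evals = [
        Evaluation("seasoned_grantmaker", score=8.0, track_record=3.0),
        Evaluation("newer_donor", score=5.0, track_record=1.0),
    ]
    print(f"Aggregate score: {aggregate_score(project_evals):.2f}")  # 7.25
```

In phases 2 and 3, payouts (play money, and eventually real impact credits) would hang off aggregate scores of roughly this kind.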
Complaints about lack of feedback for rejected grants are fairly frequent, but it seems relevant that I can’t get feedback for my accepted grants or in-progress work. The most I have ever gotten was a 👍 react when I texted them “In response to my results I will be doing X instead of the original plan on the application”. In fact I think I’ve gotten more feedback on rejections than acceptances (or in one case, I received feedback on an accepted grant, from a committee member who’d voted to reject). Sometimes they give me more money, so it’s not that the work is so bad it’s not worth commenting on. Admittedly my grants are quite small, but I’m not sure how much feedback medium or even large projects get.
Acceptance feedback should be almost strictly easier to give, and higher impact. You presumably already know positives about the grant, the impact of marginal improvements is higher in most cases, people rarely get mad about positive feedback, and even if you share negatives the impact is cushioned by the fact that you’re still approving their application. So without saying where I think the line should be, I do think feedback for acceptances is higher priority than for rejections.
A relevant question here is “what would I give up to get that feedback?”. This is very sensitive to the quality of feedback and I don’t know exactly what’s on offer, but… I think I’d give up at least 5% of my grants in exchange for a Triplebyte-style short email outlining why the grant was accepted, what their hopes are, and potential concerns.
I have had that experience too. It seems grant work is pretty independent. I think it is worth emphasizing that even though you might not get much except a thumbs up, it is important to inform the grantmakers about changes in plans. Moreover, I think your way of doing it as a statement instead of as a question is a good strategy. I have also included something along the lines of “if you have concerns, questions or objections about my proposed change of plan, please contact me asap”, so that you firmly place the ball in the grantmakers’ court and it seems fair to interpret a lack of response as an endorsement of your proposed changes.
As of October 2022, I don’t think I could have known FTX was defrauding customers.
If I’d thought about it I could probably have figured out that FTX was at best a casino, and I should probably think seriously before taking their money or encouraging other people to do so. I think I failed in an important way here, but I also don’t think my failure really hurt anyone, because I am such a small fish.
But I think in a better world I should have had the information that would lead me to conclude that Sam Bankman-Fried was an asshole who didn’t keep his promises, and that this made it risky to make plans that depended on him keeping even explicit promises, much less vague implicit commitments. I have enough friends of friends that have spoken out since the implosion that I’m quite sure that in a more open, information-sharing environment I would have gotten that information. And if I’d gotten that information, I could have shared it with other small fish who were considering uprooting their lives based on implicit commitments from SBF. Instead, I participated in the irrational exuberance that probably made people take more risks on the margin, and left them more vulnerable to the collapse of FTX. Assigning culpability is hard here, but this isn’t just an abstract worry: I can think of one person I might bear some responsibility for, and another who I would be almost 100% responsible for, except they didn’t get the grant.
I think the encouragement I gave people represents a moral failure on my part. I should have realized I didn’t have enough information to justify it, even if I never heard about specific bad behavior. Hell even if SBF wasn’t an unreliable asshole, Future Fund could have turned off the fire hose for lots of reasons. IIRC they weren’t even planning on continuing the regrantor project.
But it would also have been cool if that low key, “don’t rely on Sam- I’m not accusing him of anything malicious, he’s just not reliable” type of information had circulated widely enough that it reached me and the other very small fish, especially the ones taking major risks that only made sense in an environment where FTX money flowed freely.
I don’t know what the right way to do that would have been. But it seems important to figure out.
I also suspect that in an environment where it was easy to find out that SBF was an unreliable asshole, it would have been easier to discover or maybe even prevent the devastating fraud, because people would have felt more empowered to say no to him. But that might be wishful thinking.
I don’t know the specific circumstances of your or anyone else’s encouragement, so I want to be careful not to opine on any specific circumstances. But as a general matter, I’d encourage self-compassion for “small fish” [1] about getting caught up in “irrational exuberance.” Acting in the presence of suboptimal levels of information is unavoidable, and declining to act until things are clearer carries moral weight as well.
In retrospect, we know that the EA whispernet isn’t that reliable, that prominence in EA shouldn’t be seen as a strong indicator of reliability, that the media was asleep at the wheel, and that crypto investors exercise very minimal due diligence. But I don’t think we should expect “small fish” to have known those things in 2021 and 2022.
As far as other potential failure modes, I think an intelligent individual doing their due diligence before making a major life decision would have spotted those risks. It would have been easy to find out (without relying on inside EA knowledge) that anything crypto is risky as hell, that anything involving a company that has only been in operation a few years is pretty risky on top of that, that SBF didn’t have a long track record of consistent philanthropy at this level, and (in most cases) that the grants were fairly short-term with no guarantee of renewal.
Given that we should be gentle and understanding toward small fish who relied on FTX funding to their detriment, I would extend similar gentleness and understanding toward small fish who encouraged others (at least without actively downplaying risks). So I think there’s a difference between encouragement that actively downplayed those risks, and encouragement that did not affirmatively include a recognition of the relatively high risk.
I directionally agree, but think it is important to recognize that the signal of SBF’s unreliability would likely be contained in a sea of noise of inaccurate information, accurate information that didn’t predict future bad outcomes, and outright malicious falsehoods. A rational individual would discount the reports, with the degree of discount based on the signal:noise ratio of the whispernet and other factors.
Given that FTX funding could fall through for reasons unrelated to SBF’s character or reliability, I predict that such a signal getting out would have had a meaningful, yet fairly modest, effect on proper risk evaluation by a small fish. For sure, an increase from 30% to 40% risk [making numbers up, but seems roughly plausible?] would have changed some decisions at the margin (either not to accept grants, or to take more precautions).
But we would also need to weigh that other decisions would have changed for the worse because of noise in the whispernet about other people. While the tradeoff can be mitigated to some extent, I think it is largely inherent to whispernets and most other reputation-based systems. I generally think that assessing and communicating this sort of risk is very difficult, and that some sort of system for ameliorating the situation of people who get screwed is therefore a necessary piece of the solution. To me, this is similar to how a rational response to the risk of fire includes both fire prevention (being aware of and mitigating risks) and fire insurance (because prevention is not a foolproof process).
I’m not intending to offer an opinion about larger fish, or people with more direct information about SBF, either way.
I think expecting myself to figure out the fraud would be unreasonable. As you say, investors giving him billions of dollars didn’t notice, why should I, who received a few tens of thousands, be expected to do better due diligence? But I think a culture where this kind of information could have bubbled up gradually is an attainable and worthwhile goal.
E.g. I think my local community handled covid really well. That didn’t happen because someone wrote a big scary announcement. It was an accumulation of little things, like “this is probably nothing but always good to keep a stock of toilet paper” and “if this is airborne masks are probably useful”. And that could happen because those small statements were allowed. And I think it would have been good if people could similarly share small warnings about SBF as casually as they shared good things, and an increasingly accurate picture would emerge over time.
Am I understanding right that the main win you see here would have been protecting people from risks they took on the basis that Sam was reasonably trustworthy?
I also feel pretty unsure but curious about whether a vibe of “don’t trust Sam / don’t trust the money coming through him” would have helped discover or prevent the fraud. If you have a story for how it could have happened (e.g. via, as you say, people feeling more empowered to say no to him; maybe it would have been via his staff making fewer crazy moves on his behalf / standing up to him more?), I’d be interested.
“protect people from dependencies on SBF” is the thing for which I see a clear causal chain and am confident in what could have fixed it.
I do have a more speculative hope that an environment where things like “this billionaire firehosing money is an unreliable asshole” are easy to say would have gotten better outcomes for the more serious issues, on the margin. Maybe the FTX fraud was overdetermined; even if it wasn’t, I definitely don’t have enough insight to be confident in picking a correction. But using an abstract version of this case as an example of how I think a more open environment could have led to better outcomes:
My sense is SBF just kept taking stupid unethical bets and having them work out for him financially and socially. Maybe small consequences early on would have reduced the reward to stupid unethical bets.
Before the implosion, SBF(’s public persona) was an EA success story that young EAs aspired to copy. Less of that on the margin would probably lead to less fraud 5 years from now, especially in the world where the FTX fraud took longer to discover.
I think aping SBF’s persona was bad for other reasons, but they’re harder to justify.
SBF would have gotten more push back from staff (unless the fact that he was a known asshole made people more likely to leave, which seems good for them but not an improvement vis a vis fraud).
FTX would have had a harder time recruiting, which would have slowed them down.
Some EAs chose to trade on FTX out of ingroup loyalty, and maybe that would have happened less.
An environment where you’re free to share information about SBF being an unreliable asshole is more hospitable to sharing and hearing other negative information, and this has a snowball effect. Who knows what else would have been shared if the door had been open a crack.
Maybe Will MacAskill would have spent less time telling the press that SBF was a frugal virtue wunderkind.
Maybe other people would have told Will MacAskill to stop telling the press that SBF was a frugal virtue wunderkind.
Maybe Will MacAskill would have pushed that line to the press, but other people would have told the press “no he isn’t”, and that could have been a relatively gentle lesson for SBF and Will.
My sense is Will isn’t the only prominent EA who gave SBF a lot of press, just the most prominent and the one I heard the most about. Hopefully all of that would be reduced.
Maybe people would have been more open when considering whether FTX money was essentially casino money, and what the ethical implications of that were.
Good posts generate a lot of positive externalities, which means they’re undersupplied, especially by people who are busy and don’t get many direct rewards from posting. How do we fix that? What are rewards relevant authors would find meaningful?
Here are some possibilities off the top of my head, with some commentary. My likes are not universal and I hope the comments include people with different utility functions.
Money. Always a classic, rarely disliked although not always prioritized. I’m pretty sure this is why LTFF and EAIF are writing more now.
Appreciation (broad). Some people love these. I definitely prefer getting them over not getting them, but they’re not that motivating for me. Their biggest impact on motivation is probably cushioning the blow of negative comments.
Appreciation (specific). Things like “this led to me getting my iron tested” or “I changed my mind based on X”. I love these, they’re far more impactful than generic appreciation.
High quality criticism that changes my mind.
Other commenters arguing with bad commenters.
One of the hardest parts of writing for me is getting a shitty, hostile comment, and feeling like my choices are “let it stand” or “get sucked into a miserable argument that will accomplish nothing”. Commenters arguing with commenters gets me out of this dilemma, which is already great, but then sometimes the commenters display deep understanding of the thing I wrote and that’s maybe my favorite feeling.
Deliberately not included: longer term rewards like reputation that can translate into jobs, employees, etc. I’m specifically looking for quick rewards for specific posts
I definitely agree that funding is a significant factor for some institutional actors.
For example, RP’s Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had the capacity / could afford to do so. Our capacity is entirely bottlenecked on funding, and since we are ~entirely reliant on paid commissions (we don’t receive any grants for general support), time spent publishing reports is basically just pro bono, adding to our funding deficit.
Examples of this sort of unpublished research include:
The two reports mentioned by CEA here about attitudes towards EA post-FTX among the general public, elites, and students on elite university campuses.
Followup posts about the survey reported here about how many people have heard of EA, to further discuss people’s attitudes towards EA, and where members of the general public hear about EA (this differs systematically)
Updated numbers on the growth of the EA community (2020-2022) extending this method and also looking at numbers of highly engaged longtermists specifically
Several studies we ran to develop reliable measures of how positively inclined towards longtermism people are, looking at different predictors of support for longtermism and how these vary in the population
Reports on differences between neartermists and longtermists within the EA community and on how neartermist / longtermist efforts influence each other (e.g. to what extent neartermist outreach, like GiveWell or Peter Singer’s articles about poverty, leads to increased numbers of longtermists)
Whether the age at which one first engaged with EA predicts lower / higher future engagement with EA
A significant dynamic here is that even where we are paid to complete research for particular orgs, we are not funded for the extra time it would take to write up and publish the results for the community. So doing so is usually unaffordable, even where we have staff capacity.
Of course, much of our privately commissioned research is private, such that we couldn’t post it. But there are also significant amounts of research that we would want to conduct independently, so that we could publish it, which we can’t do purely due to lack of funding. This includes:
More message testing research related to EA /longtermism (for an example see Will MacAskill’s comment referencing our work here), including but not limited to:
Testing the effectiveness of specific arguments for these causes
Testing how “longtermist” or “existential risk” or “effective altruist” or “global priorities” framings/brandings compare in terms of how people respond to them (including comparing this to just advocating for specific concrete x-risks without any of these framings)
Testing effectiveness of different approaches to outreach in different populations for AI safety / particular policies
“We want to publish but can’t because the time isn’t paid for” seems like a big loss[1], and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it’s a crisply defined chunk of work with clear outcomes. But I imagine you guys have already put some thought into how to get this paid for.
To be totally honest, I have qualms about the specific projects you mention, they seem centered on social reality not objective reality. But I value a lot of RP’s other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren’t enough to override the general principle.
Thanks! I’m planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:
I think funding projects from multiple smaller donors is just generally more difficult to coordinate than funding from a single source
A lot of people seem to assume that our projects already are fully funded or that they should be centrally funded because they seem very much like core community infrastructure, which reduces inclination to donate
I’d be curious to understand this line of thinking better if you have time to elaborate. “Social” vs “objective” doesn’t seem like a natural and action-guiding distinction to me. For example:
Does everyone we want to influence hate EA post-FTX?
Is outreach based on “longtermism” or “existential risk” or principles-based effective altruism or specific concrete causes more effective at convincing people?
Do people who first engage with EA when they are younger end up less engaged with EA than those who first engage when they are older?
How fast is EA growing?
all strike me as objective social questions of clear importance. Also, it seems like the key questions around movement building will often be (characterisable as) “social” questions. I could understand concerns about too much meta but too much “social” seems harder to understand.[1]
A possible interpretation I would have some sympathy for is distinguishing between concern with what is persuasive vs what is correct. But I don’t think this raises concerns about these kinds of projects, because:
- A number of these projects are not about increasing persuasiveness at all (e.g. how fast is EA growing? Where are people encountering EA ideas?). Even findings like “does everyone on elite campuses hate EA?” are relevant for reasons other than simply increasing persuasiveness, e.g. decisions about whether we should increase or decrease spending on outreach at the top of the funnel.
- Even if you have a strong aversion to optimising for persuasiveness (you want to just present the facts and let people respond how they will), you may well still want to know if people are totally misunderstanding your arguments as you present them (which seems exceptionally common in cases like AI risk).
- And, of course, I think many people reasonably think that if you care about impact, you should care about whether your arguments are persuasive (while still limiting yourself to arguments which are accurate, sincerely held etc.).
- The overall EA portfolio seems to assign a very small portion of its resources to this sort of research as it stands (despite dedicating a reasonably large amount of time to a priori speculation about these questions (1)(2)(3)(4)(5)(6)(7)(8)) so some more empirical investigation of them seems warranted.
Yeah, “objective” wasn’t a great word choice there. I went back and forth between “objective”, “object”, and “object-level”, and probably made the wrong call. I agree there is an objective answer to “what percentage of people think positively of malaria nets?” but view it as importantly different than “what is the impact of nets on the spread of malaria?”
I agree the right amount of social meta-investigation is >0. I’m currently uncomfortable with the amount EA thinks about itself and its presentation; but even if that’s true, professionalizing the investigation may be an improvement. My qualms here don’t rise to the level where I would voice them in the normal course of events, but they seemed important to state when I was otherwise pretty explicitly endorsing the potential posts.
I can say a little more on what in particular made me uncomfortable. I wouldn’t be writing these if you hadn’t asked and if I hadn’t just called for money for the project of writing them up, and if I were writing them in the normal course of events, I’d be aiming for a much higher quality bar. I view saying these at this quality level as a little risky, but worth it because this conversation feels really productive and I do think these concerns about EA overall are important, even though I don’t think they’re your fault in particular:
several of these questions feel like they don’t cut reality at the joints, and would render important facets invisible. These were quick summaries so it’s not fair to judge them, but I feel this way about a lot of EA survey work where I do have details.
several of your questions revolve around growth; I think EA’s emphasis on growth has been toxic and needs a complete overhaul before EA is allowed to gather data again.
I especially think CEA’s emphasis on Highly Engaged people is a warped frame that causes a lot of invisible damage. My reasoning is pretty similar to Theo’s here.
I don’t believe EA knows what to do with the people it recruits, and should stop worrying about recruiting until that problem is resolved.
Asking “do people introduced to EA younger stick around longer?” has an implicit frame that longer is better, and is missing follow-ups like “is it good for them? what’s the counterfactual for the world?”
I think we need to be a bit careful with this, as I saw many highly upvoted posts that in my opinion have been actively harmful. Some very clear examples:
Theses on Sleep, claiming that sleep is not that important. I know at least one person who tried to sleep 6 hours/day for a few weeks after reading this, with predictable results.
A Chemical Hunger, “a series by the authors of the blog Slime Mold Time Mold (SMTM) that has been received positively on LessWrong, argues that the obesity epidemic is entirely caused by environmental contaminants.” It wouldn’t surprise me if it caused several people to update their diets in worse ways, or in general have a worse model of obesity
In general, I think we should promote more posts like “Veg*ns should take B12 supplements, according to nearly-unanimous expert consensus” while not promoting posts like “Veg*nism entails health tradeoffs”, when there is no scientific evidence of this and the expert consensus is to the contrary. (I understand that your intention was not to claim that a vegan diet was worse than an average non-vegan diet, but that’s how most readers I’ve spoken to updated in response to your posts.)
I would be very excited about encouraging posts that broadcast knowledge where there is expert consensus that is widely neglected (e.g. Veg*ns should take B12 supplements), but I think it can also be very easy to overvalue hard-to-measure benefits, and we should keep in mind that the vast majority of posts get forgotten after a few days.
I think you are incorrectly conflating being mistaken and being “actively harmful” (what does “actively” mean here?). I think most things that are well-written and contain interesting true information or perspectives are helpful, your examples included.
Truth-seeking is a long game that is mostly about people exploring ideas, not about people trying to minimize false beliefs at each individual moment.
That’s a fair point, I listed posts that were clearly not only mistaken but also harmful, to highlight that the cost-benefit analysis of “good posts” as a category is very non-obvious.
I shouldn’t have used the term “actively”, I edited the comment.
I fear that there’s a very real risk of building castles in the sky, where interesting true information gets mixed with interesting not-so-true information and woven into a misleading narrative that causes bad consequences, that this happens often, and that we should be mindful of that.
I should have explicitly mentioned it, but I mostly agree with Elizabeth’s quick take. I just want to highlight that while some “good posts” “generate a lot of positive externalities”, many other “good posts” are wrong and harmful (and many many more get forgotten after a few days). I’m also probably more skeptical of hard-to-measure diffuse benefits without a clear theory of change or observable measures and feedback loops.
LessWrong’s emoji palette is great
That palette is not just great in the abstract, it’s great as a representation of LW. I did some very interesting anthropology with some non-rationalist friends explaining the meaning and significance of the weirder reacts.
A lot of what I explained was how specific reacts relate to one of the biggest pain points on LW (and EAF): shitty comments. The reacts are weirdly powerful, in part because it’s not the comments’ existence that’s so bad, it’s knowing that other people might read them and not understand they are shitty. I could explain why in a comment of my own, but that invites more shitty comments and draws attention to the original one. It’s only worth it if many people are seeing and believing the comment.
Emojis neatly resolve this. If several people mark a comment as soldier mindset, I feel off the hook for arguing with it. And if several people (especially people I respect) mark a comment as insightful or changing their mind, that suggests that at a minimum it’s worth the time to engage with the comment, and quite possibly I am in the wrong.
You might say I should develop a thicker skin so shitty comments bug me less, and that is probably true on the margin, but I think it’s begging the question. Emojis give me valuable information about how a comment is received; positive emojis suggest I am wrong that it is shitty, or at least about how obvious that is. It is good to respond differently to good comments, obviously shitty comments, and controversial comments, and detailed reacts make that much easier. So I think this was a huge win for LessWrong.
Meanwhile on the EAForum…
[ETA 2023-11-10: turns out I picked a feel-good thread with a special react palette to get my screen shots. I still think my point holds overall but regret the accidental exaggeration. I should have been more surprised when I went to get a screen shot and the palette wasn’t what I expected]
This palette has 5 emojis (clapping, party, heart, star, and surprise) covering maybe 2.5 emotions if you’re generous and count heart as care and not just love. It is considerably less precise than Facebook’s palette. I suspect the limited palette is an attempt to keep things positive, but given that negative comments are (correctly) allowed, this only limits the ability to cheaply push back.
I’ll bet EAF put a lot of thought into their palette. This isn’t even their first palette; I found out the palette had changed when I went to get a screen shot for this post. I would love to hear more about why they chose these 5.
This is not quite as bad as the feel-good palette (I’m so sorry!), but I still think it leaves a tremendous amount of value on the table. It gives no way to give specific negative feedback to bad comments, like “too combative” or “misunderstands position?”. It’s not even particularly good at compliments, except for “Changed my mind”.
How common do you think “shitty comments” are? And how well/poorly do you think the existing karma system provides an observer with knowledge that the user base “understand[s] they are shitty”? (To be sure, it doesn’t tell you if the voting users understand exactly why the comment is shitty.)
I’m not sure how many people would post attributed-to-them emojis if they weren’t already anonymously downvoting a comment for being shitty. So if they aren’t already getting significant downvotes, I don’t know how many negative emojis they would get here.
They’re especially useful for comments of mixed quality- e.g. someone is right and making an important point, but too aggressively. Or a comment is effortful, well-written, and correct within its frame, but fundamentally misunderstands your position. Or, god forbid, someone makes a good point and a terrible point in the same comment. I was originally skeptical of line-level reacts but ended up really valuing them because of cases like these.
There are also reacts like “elaborate”, “taboo this word”, and “example” that invite a commenter to correct problems, at which point the comment may become really valuable. Unfortunately there are no notifications for reacts, so this can easily go unnoticed, but it at least raises the option.
If I rephrase your question as “how often do I see comments for which reacts convey something important I couldn’t say with karma?”: most of my posts since reacts came out have been controversial, so I’m using many comment reacts per post (not always dismissively).
I also find positive emojis much more rewarding than karma, especially Changed My Mind.
I like the LW emoji palette, but it is too much. Reading forum posts and parsing through comments can be mentally taxing. I don’t want to spend additional effort going through a list of forty-something emojis and buttons to react to something, especially comments. I am often pressed for time, so almost always I would avoid the LW emoji palette entirely. Maybe a few other important reactions can be added instead of all of them? Or maybe there could be a setting which allows people to choose if they want to see a “condensed” or “extended” emoji palette? Either way, just my two cents.
I agree EAF shouldn’t have a LW-sized palette, much less LW’s specific palette. I want EAF to have a palette that reflects its culture as well as LW’s palette reflects its culture. And I think that’s going to take more than 4 reacts (note that my original comment mortifyingly used a special palette made for a single post, the new version has the normal EAF reacts of helpful, insightful, changed my mind, and heart), but way less than is in the LW palette.
I do think part of LessWrong’s culture is preferring to have too many options rather than making do with the wrong one. I know the team has worked really hard to keep reacts to a manageable level, while making most of them very precise, while covering a wide swath of how people want to react. I think they’ve done an admirable job (full disclosure: I’m technically on the mod team and give opinions in slack, but that’s basically the limit of my power). This is something I really appreciate about LW, but I know shrinks its audience.
I’m not on LW very often, how frequently do you see these emojis being used?
From a UX perspective, I agree with Akash—it seems like there are way too many options and my prior is that people wouldn’t use >80% of them.
Hi! I think we might have a bug — I’m not sure where you’re seeing those emojis on the Forum. For me, here are the emojis that show up:
@Agnes Stenlund might be able to say more about how we chose those,[1] but I do think we went for this set as a way to create a low-friction way of sharing non-anonymous positive feedback (which authors and commenters have told us they lack, and some have told us that they feel awkward just commenting with something non-substantive but positive like “thanks!”) while also keeping the UX understandable and easy to use. I think it’s quite possible that it would be better to also add some negative/critical emojis, but I’m not very convinced right now, and not convinced that it’s super promising relative to the other stuff we’re working on or that it’s something we should dive deeper into. It won’t be my call in the end, regardless, but I’m definitely up for hearing arguments about why this is wrong!
I don’t view this as a finalized set — I think there’s a >50% chance (75%?) that we’ll have changed at least something about it in the next ~6 months.
Not a bug—it’s from Where are you donating this year, and why? which is grandfathered into an old experimental voting system (and it’s the only post with this voting system—there are a couple of others with different experimental systems).
I’m so sorry- I should have been more surprised when I went to get a screenshot and it wasn’t the palette I expected. I have comments set to notify me only once per day, so I didn’t get alerted to the issue until now.
I wrote this with the standard palette so I still think there is a problem, but I feel terrible for exaggerating it with a palette that was perfectly appropriate for its thread.
Semi-tangential question: what’s the rationale for making the reactions public but the voting (including the agree/disagree voting) anonymous?
Where are you seeing that emoji palette on here?
See sister thread- this was for a specific positivity focused thread I picked completely at random 😱.
As Ollie mentioned, I made the set you referenced for just this one thread. As far as I remember it was meant to support positive vibes in that thread and was done very quickly, so I would not say a lot of thought went into that palette.
@Lizka and co: could I ask for some commentary on this?
A repost from the discussion on NDAs and Wave (a software company). Wave was recently publicly revealed to have made severance dependent on non-disparagement agreements, cloaked by non-disclosure agreements. I had previously worked at Wave, but negotiated away the non-disclosure agreement (but not the non-disparagement agreement).
I appreciate the kudos here, but feel like I should give more context.
I think some of what led me to renegotiate was a stubborn streak and righteousness about truth. I mostly hear about those traits when they annoy people, so it’s really nice to have them recognized in a good light here. But that righteous streak was greatly enabled by the fact that my mom is a lawyer who modeled reading legal documents before signing (even when it’s embarrassing your kids, who just want to join their friends at the rock-climbing birthday party), and that I could afford to forgo severance. Obviously I really wanted the money, and I couldn’t afford to take this kind of stand every week. But I believe there were people who couldn’t even afford to add a few extra days, and so almost had to cave.
To the extent people in that second group were unvirtuous, I think the lack of virtue occurred when they didn’t create enough financial slack to even have the time to negotiate. By the time they were laid off without a cushion it was too late. And that’s not available to everyone- Wave paid well, but emergencies happen, and any one of them could have had a really good reason their emergency fund was empty.
So the main thing I want to pitch here is that “getting yourself into a position where virtue is cheap” is an underrated strategy.
This is one benefit to paying people well, and a reason having fewer better-paid workers is sometimes better than more people earning less money. If your grants or salary give you just enough to live as long as the grants are immediately renewed/you don’t get fired, even a chance of irritating your source of income imperils your ability to feed yourself. 6 months expenses in savings gives you the ability to risk an individual job/grant. Skills valued outside EA give you the ability to risk pissing off all of EA and still be fine.
I’m emphasizing risk here because I think it’s the bigger issue. If you know something is wrong, you’ll usually figure out a way to act on it. The bigger problem is when you have some concerns that legitimately could be nothing, but worry that investigating will imperil your livelihood.
I sometimes argue against certain EA payment norms because they feel extractive, or cause recipients to incur untracked costs. E.g. “it’s not fair to have a system that requires unpaid work, or going months between work in ways that can’t be planned around and aren’t paid for”. This was the basis for some of what I said here. But I’m not sure this is always bad, or that the alternatives are better. Some considerations:
if it’s okay for people to donate money I can’t think of a principled reason it’s not okay for them to donate time → unpaid work is not a priori bad.
If it would be okay for people to solve the problem of gaps in grants by funding bridge grants, it can’t be categorically disallowed to self-fund the time between grants.
If partial self-funding is required to do independent, grant-funded work, then only people who can afford that will do such work. To the extent the people who can’t would have done irreplaceably good work, that’s a loss, and it should be measured. And to the extent some people would personally enjoy doing such work but can’t, that’s sad for them. But the former is an empirical question weighed against the benefits of underpaying, and the latter is not relevant to impact.
I think the costs of blocking people who can’t self-fund from this kind of work are probably high, especially the part where it categorically prevents segments of society with useful information from participating. But this is much more relevant for e.g. global development than AI risk.
A norm against any unpaid work would mean no one could do anything unless they got funder approval ahead of time, which would be terrible.
A related problem is when people need to do free work (broadly defined, e.g. blogging counts) to get a foot in the door for paid work. This has a lot of the same downsides as requiring self-funding, but, man, it seems pretty stupid to insist on ignoring the information available from free sources, and if you don’t ban it there will be pressure to do free work.
To me, “creating your own projects, which people use to inform their opinions of you” feels pretty different from “you must do 50 hours of task X unpaid before we consider a paying position”, but there are ambiguous cases.
it’s pretty common for salaried EAs to do unpaid work on top of their normal job. This feels importantly different to me from grant-funded people funding their own bridge loans, because of the job security and predictability. The issue isn’t just “what’s your take home pay per hour?”, it’s “how much ability to plan do you have?”
Any money you spend on one independent can’t be spent on someone else. To the extent EA is financially constrained, that’s a big cost.
It feels really important to me that costs of independence, like self-bridge-funding or the headache of grant applications, get counted in some meaningful sense, the same as donating money or accepting a low salary.
I feel like a lot of castle discourse missed the point.
By default, OpenPhil/Dustin/Owen/EV don’t need anyone’s permission for how they spend their money.
And it is their money, AFAICT open phil doesn’t take small donations. I assume Dustin can advocate for himself here.
One might argue that the castle has such high negative externalities it can be criticized on that front. I haven’t seen anything to convince me of that, but it’s a possibility and “right to spend one’s own money” doesn’t override that.
You could argue OpenPhil etc made some sort of promise they are violating by buying the castle. I don’t think that’s true- but I also think the castle-complainers have a legitimate grievance.
I do think the word “open” conveys something of a promise, and I will up my sympathy for open phil if they change their name. But my understanding is they are more open than most foundations.
My guess is that lots of people entered EA with inaccurate expectations, and the volume at which this happens indicates a systemic problem, probably with recruiting. They felt ~promised that EA wasn’t the kind of place where people bought fancy castles, or would at least publicly announce they’d bought a retreat center and justify it with numbers.
Highly legible, highly transparent parts of EA exist, and I’m glad they do. But it’s not all of EA, and I don’t think it should be. I think it’s important to hold people to commitments, and open phil at one point did have a commitment to transparency, but they publicly renounced it years ago so that’s no longer in play. I think the problem lies with the people who set the false expectations, which I imagine happened in recruiting.
It’s hard for me to be more specific than this because I haven’t followed EA recruiting very closely, so what reaches me tends to be complaints about the worst parts. My guess is this lies in the more outward facing parts of Effective Ventures (GWWC, 80k, CEA’s university recruiting program, perhaps the formalization of EA groups in general).
[I couldn’t quickly verify this but my understanding is open phil provides a lot of the funding for at least some of these orgs, in which case it does bear some responsibility for the misleading recruiting]
I would like to see recruiting get more accurate about what to expect within EA. I want that partially because honesty is generally good, partially because this seems like a miserable experience for people who have been misled. And partially because I want EA to be a weird do-ocracy, and recruiting lots of people who object to doing weird things without permission slows that down.
I think the first point here—that the buyers “don’t need anyone’s permission” to purchase a “castle”—isn’t contested here. Other than maybe the ConcernedEA crowd, is anyone claiming that they were somehow required to (e.g.) put this to a vote?
I think the “right to spend one’s own money” in no way undermines other people’s “right to speak one’s own speech” by lambasting that expenditure. In the same way, my right to free speech doesn’t prevent other people from criticizing me for it, or even deciding not to fund/hire me if I were to apply for funding or a job. There are circumstances in which we have—or should have—special norms against negative reactions by third parties; for instance, no one should be retaliated against for reporting fraud, waste, abuse, harassment, etc. But the default rule is that what the critics have said here is fair game.
A feeling of EA having breached a “~promise[]” isn’t the only basis for standing here. Suppose a non-EA megadonor had given a $15MM presumably tax-deductible donation to a non-EA charity for buying a “castle.” Certainly both EAs and non-EAs would have the right to criticize that decision, especially because the tax-favored nature of the donation meant that millions’ worth of taxes were avoided by the donation. If one wishes to avoid most public scrutiny, one should make it clear that the donation was not tax-advantaged. In that case, it’s the same as the megadonor buying a “castle” for themselves.
Moreover, I think the level of negative externalities required to give third-party EAs standing to criticize is quite low. The “right to speak one’s own speech” is at least as fundamental as the proposed “right to spend one’s own money.” If the norm is going to be that third parties shouldn’t criticize—much less take adverse actions against—an EA entity unless the negative PR & other side effects of the entity’s action exceed those of the “castle” purchase, then that would seem a pretty fundamental shift in how things work. Because the magnitude of most entities’ actions—especially individuals’—is generally an order of magnitude (or more) less than the magnitude of OP and EVF’s actions, the negative externalities will almost never meet this standard.
I 100% agree with you that people should be and are free to give their opinions, full stop.
Many specific things people said only make sense to me if they have some internal sense that they are owed a justification and input (example, example, example, example).
I almost-but-don’t-totally reject PR arguments. EA was founded on “do the thing that works not the thing that looks good”. EAs encourage many other things people find equally distasteful or even abhorrent, because they believe it does the most good. So “the castle is bad PR” is not a good enough argument, you need to make a case for “the castle is bad PR and meaningfully worse than these other things that are bad PR but still good”. I believe things in that category exist, and people are welcome to make arguments that the castle is one of them, but you do have to make the full argument.
I think you’re slightly missing the point of the ‘castle’ critics here.
Technically this is obviously true. And it was the main point behind one of the most popular responses to FTX and all the following drama. But I think that point and the post miss people’s concerns completely and come off as quite tone-deaf.
To pick an (absolutely contrived) example, let’s say OpenPhil suddenly says it now believes that vegan diets are more moral and healthier than all other diets, and that B12 supplementation increases x-risk, and they’re going to funnel billions of dollars into this venture to persuade people to go Vegan and to drone-strike any factories producing B12. You’d probably be shocked and think that this was a terrible decision and that it had no place in EA.
OpenPhil saying “it’s our money, we can do what we want” wouldn’t hold much water for you, and the same thing I think goes for the Wytham Abbey critics—who I think do have a strong initial normative point that £15m counterfactually could do a lot of good with the Against Malaria Foundation or Helen Keller International.
Like it’s not just a concern about ‘high negative externalities’: many people saw this purchase, along with the lack of convincing explanation (to them), and think that this is just a negative-EV purchase, with negative externalities on top—and then there was little explanation forthcoming to change their mind.
I think OpenPhil maybe did this thinking it was a minor part of their general portfolio, without realising the immense power, both explicit and implicit, they have over the EA community, its internal dynamics, and its external perception. They may not officially be in charge of EA, but by all accounts unofficially it works something like that (along with EVF), and I think that should at least figure into their decision-making somewhere.
Is the retreat from transparency true? If so, are there some references you could provide me for this? I also feel like there’s a bit of a ‘take-it-or-leave-it’ implicit belief/attitude from OpenPhil here if true, which I think is unfortunate and, honestly, counterproductive.
I would like to see recruiting get more accurate about what to expect within EA, but I’m not sure what that would look like. I mean, I still think that EA “not being the kind of place where people buy fancy castles” is a reasonable thing to expect and want from EA overall? So I’m not sure that I disagree that people are entering with these kinds of expectations, but I’m confused about why you think it’s inaccurate? Maybe it’s descriptively inaccurate, but I’m a lot less sure that it’s normatively inaccurate?
Bombing B12 factories has negative externalities and is well covered by that clause. You could make it something less inflammatory, like funding anti-B12 pamphlets, and there would still be an obvious argument that this was harmful. Open Phil might disagree, and I wouldn’t have any way to compel them, but I would view the criticism as having standing due to the negative externalities. I welcome arguments the retreat center has negative externalities, but haven’t seen any that I’ve found convincing.
My understanding is:
Open Phil deliberately doesn’t fill the full funding gap of poverty and health-focused charities.
While they have set a burn rate and are currently constrained by it, that burn rate was chosen to preserve money for future opportunities they think will be more valuable. If they really wanted to do both AMF and the castle, they absolutely could.
Given that, I think the castle is a red herring. If people want to be angry about open phil not filling the full funding gaps when it is able I think you can make a case for that, but the castle is irrelevant in the face of its many-billion dollar endowment.
https://www.openphilanthropy.org/research/update-on-how-were-thinking-about-openness-and-information-sharing/
Even assuming OP was already at its self-imposed cap for AMF and HKI, it could have asked GiveWell for a one-off recommendation. The practice of not wanting to fill 100% of a funding gap doesn’t mean the money couldn’t have been used profitably elsewhere in a similar organization.
are you sure GW has charities that meet their bar that they aren’t funding as much as they want to? I’m pretty sure that used to not be the case, although maybe it has changed. There’s also value to GW behaving predictably, and not wildly varying how much money it gives to particular orgs from year to year.
This might be begging the question, if the bar is raised due to anticipated under funding. But I’m pretty sure at one point they just didn’t have anywhere they wanted to give more money to, and I don’t know if that has changed.
2023: “We expect to find more outstanding giving opportunities than we can fully fund unless our community of supporters substantially increases its giving.”
Giving Season 2022: “We’ve set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled.”
July 2022: “we don’t expect to have enough funding to support all the cost-effective opportunities we find.” Reports rolling over some money from 2021, but much less than originally believed.
Giving Season 2021: GiveWell expects to roll over $110MM, but also believes it will find very-high-impact opportunities for those funds in the next year or two.
Giving Season 2020: No suggestion that GW will run out of good opportunities—“If other donors fully meet the highest-priority needs we see today before Open Philanthropy makes its January grants, we’ll ask Open Philanthropy to donate to priorities further down our list. It won’t give less funding overall—it’ll just fund the next-highest-priority needs.”
Thanks for the response Elizabeth, and the link as well, I appreciate it.
On the B12 bombing example, it was deliberately provocative to show that, in extremis, there are limits to how convincing one would find the justification “the community doesn’t own its donor’s money” as a defence for a donation/grant
On the negative externality point, maybe I didn’t make my point that clear. I think a lot of critics are not just concerned about the externalities, but the actual donation itself, especially the opportunity cost of the purchase. I think perhaps you simply disagree with castle critics on the object level of ‘was it a good donation or not’.
I take the point about Open Phil’s funding gap perhaps being the more fundamental/important issue. This might be another case of decontextualising vs contextualising norms leading to difficult community discussions. It’s a good point and I might spend some time investigating that more.
I still think, in terms of expectations, the new EA joiners have a point. There’s a big prima facie tension between the drowning child thought experiment and the Wytham Abbey purchase. I’d be interested to hear what you think a more realistic ‘recruiting pitch’ to EA would look like, but don’t feel the need to spell that out if you don’t want.
I think a retreat center is a justifiable idea, I don’t have enough information to know if Wytham in particular was any good, and… I was going to say “I trust open phil” here, but that’s not quite right, I think open phil makes many bad calls. I think a world where open phil gets to trust its own judgement on decisions with this level of negative externality is better than one where it doesn’t.
I understand other people are concerned about the donation itself, not just the externalities. I am arguing that they are not entitled to have open phil make decisions they like, and the way some of them talk about Wytham only makes sense to me if they feel entitlement around this. They’re of course free to voice their disagreement, but I wish we had clarity on what they were entitled to.
This is the million dollar question. I don’t feel like I have an answer, but I can at least give some thoughts.
I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it’s no hardship for me to say we should remove that from recruiting.
Back in 2015 three different EA books came out- Singer’s The Most Good You Can Do, MacAskill’s Doing Good Better, and Nick Cooney’s How To Be Great At Doing Good. My recollection is that Cooney was the only one who really attempted to transmit epistemic taste and a drive to think things through. MacAskill’s book felt like he had all the answers and was giving the reader instructions, and Singer’s had the same issues. I wish EA recruiting looked more like Cooney’s book and less like MacAskill’s.
That’s a weird sentence because Nick Cooney has a high volume of vague negative statements about him. No one is very specific, but he shows up on a lot of animal activism #metoo type articles. So I want to be really clear this preference is for that book alone, and it’s been 8 years since I read it.
I think the emphasis on doing The Most Possible Good (* and nothing else counts) makes people miserable and less effective. It creates a mix of decision paralysis, excess deference, and pushes people into projects too ambitious for them to learn from, much less succeed at.
I’m interested in what Charity Entrepreneurship thinks we should do. They consistently incubate the kind of small, gritty projects I think make up the substrate of a healthy ecosystem. TBH I don’t think any of their cause areas are as impactful as x-risk, but succeeding at them is better than failing to influence x-risk, and they’re skill-building while they do it. I feel like CE gets that real work takes time, and I’d like to see that attitude spread.
(@Judith, @Joey would love to get your take here)
@Caleb Parikh has talked about how he grades people coming from “good” EA groups more harshly, because they’re more likely to have been socially pressured into “correct” views. That seems like a pretty bad state of affairs.
I think my EA group (Seattle, 2014) handled this fantastically: there was a lot of arguing with each other and with EA doctrine. I’d love to see more things look like that. But that group was made up heavily of adult rationalists with programming jobs, not college students.
Addendum: I just checked out Wytham’s website, and discovered they list six staff. Even if those people aren’t all full-time, several of them supervise teams of contractors. This greatly ups the amount of value the castle would need to provide to be worth the cost. AFAIK they’re not overstaffed relative to other venues, but you need higher utilization to break even.
Additionally, the founder (Owen Cotton-Barratt) has stepped back for reasons that seem merited (a history of sexual harassment), but a nice aspect of having someone important and busy in charge was that he had a lot less to lose if it was shut down. The castle seems more likely to be self-perpetuating when the decisions are made by people with fewer outside options.
I still view this as fundamentally open phil’s problem to deal with, but it seemed good to give an update.
“I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it’s no hardship for me to say we should remove that from recruiting.”—I’m interested in why you think this?
It puts you in a high SNS (sympathetic nervous system) activation state, which is inimical to the kind of nuanced math good EA requires.
As Minh says, it’s based in avoidance of shame and guilt, which also make people worse at nuanced math.
The full parable is “drowning child in a shallow pond”, and the shallow pond smuggles in a bunch of assumptions that aren’t true for global health and poverty. Such as
“we know what to do”, “we know how to implement it”, and “the downside is known and finite”, which just don’t hold for global health and poverty work. Even if you believe sure-fire interventions exist and somehow haven’t been fully funded, the average person’s ability to recognize them is dismal, and many options make things actively worse. The urgency of drowningchildgottasavethemnow makes people worse at distinguishing good charities from bad. The more accurate analogy would be “drowning child in a fast moving river when you don’t know how to swim”.
I think Peter Singer believes this so he’s not being inconsistent, I just think he’s wrong.
“you can fix this with a single action, after which you are done.” Solving poverty for even a single child is a marathon.
“you are the only person who can solve this”. I think there is something good about getting people to feel ownership over the problem and avoiding the bystander effect, but falsely invoking an analogy to a situation where that’s true is not the way to do it.
A single drowning child can be fixed via emergency action. A thousand drowning children scattered across my block, replenishing every day, requires a systemic fix. Maybe a fence, or draining the land. And again, the fight or flight mode suitable for saving a single child in a shallow pond is completely inappropriate for figuring out and implementing the systemic solution.
EA is much more about saying “sorry, actively drowning children, I can do more good by putting up this fence and preventing future deaths”.
When Singer first made the analogy clothes were much more expensive than they are now, and when I see the argument being made it’s typically towards people who care very little about clothes. What was “you’d make a substantial sacrifice if a child’s life was on the line” has become “you aren’t so petty as to care about your $30 fast fashion shoes, right?”. Just switching the analogy to “ruining your cell phone” would get more of the original intent.
I think this might be a good top-level post—would be keen for more people to see and discuss this point.
Do people still care about drowning child analogy? Is it still used in recruiting? I’d feel kind of dumb railing against a point no one actually believed in.
I’m not sure (my active intro community-building days were ~2019), but I think it is possibly still in the intro syllabus? You could add a disclaimer at the top.
I will say I also never use the Drowning Child argument. For several reasons:
I generally don’t think negative emotions like shame and guilt are a good first impression/initial reason to join EA. People tend to distance themselves from sources of guilt. It’s fine to mention the drowning child argument maybe 10-20 minutes in, but I prefer to lead with positive associations.
I prefer to minimise use of thought experiments/hypotheticals in intros, and prefer to use examples relatable to the other person. IMO, thought experiments make the ethical stakes seem too trivial and distant.
What I often do is to figure out what cause areas the other person might relate to based on what they already care about, describe EA as fundamentally “doing good, better” in the sense of getting people to engage more thoughtfully with values they already hold.
Thanks that’s helpful!
Just a quick comment that I strong upvoted this post because of the point about violated expectations in EA recruitment, and disagree voted because it’s missing some important points of why EAs should be concerned about how OP and other EA orgs spend their EA money.
if you have the energy, I’d love to hear your disagreement on open phil or ownership of money.
I feel similarly to Jason and JWS. I don’t disagree with any of the literal statements you made but I think the frame is really off. Perhaps OP benefits from this frame, but I probably disagree with that too.
Another frame: OP has huge amounts of soft and hard power over the EA community. In some ways, it is the de facto head of the EA community. Is this justified? How effective is it? How do they react to requests for information about questionable grants that have predictably negative impacts on the wider EA community? What steps do they take to guard against motivated reasoning when doing things that look like stereotypical examples of motivated reasoning? There are many people who have a stake in these questions.
Thanks, that is interesting and feels like it has conversational hooks I haven’t heard before.
What would it mean to say Open Phil was justified or not justified in being the de facto head of the community? I assume you mean morally justified, since it seems pretty logical on a practical level.
Supposing a large enough contingent of EA decided it was not justified; what then? I don’t think anyone is turning down funding for the hell of it, so giving up open phil money would require a major restructuring. What does that look like? Who drives it? What constitutes large enough?
Briefly in terms of soft and hard power:
Soft power
Deferring to OP
Example comment about how much some EAs defer to OP even when they know it’s bad reasoning.
OP’s epistemics are seen as the best in EA and jobs there are the most desirable.
The recent thread about OP allocating most of its neartermist budget to FAW (farm animal welfare), and especially its comments, shows much reduced deference (or at least more willingness to openly take such positions) among some EAs.
As more critical attention is turned towards OP among EAs, I expect deference will reduce further. E.g. some of David Thorstad’s critical writings have been cited on this forum.
I expect this will continue happening organically, particularly in response to failures and scandals, and the castle played a role in reduced deference.
Hard power
I agree no one is turning down money willy-nilly, but if we ignore labels, how much OP money and effort actually goes into governance and health for the EA community, rather than recruitment for longtermist jobs?
In other words, I’m not convinced it would require restructuring rather than just structuring.
A couple of EAs I spoke to about reforms both talked about how huge sums of money are needed to restructure the community and it’s effectively impossible without a megadonor. I didn’t understand where they were coming from. Building and managing a community doesn’t take big sums of money and EA is much richer than most movements and groups.
Why can’t EAs set up a fee-paying society? People could pay annual membership fees and in exchange be part of a body that provided advice for donations, news about popular cause areas and the EA community, a forum, annual meetings, etc. Leadership positions could be decided by elections. I’m just spitballing here.
Of course this depends on what one’s vision for the EA community is.
What do you think?
The math suggests that the meta would look much different in this world. CEA’s proposed budget for 2024 is $31.4MM by itself, about half for events (mostly EAG), about a quarter for groups. There are of course other parts of the meta. There were 3567 respondents to the EA Survey 2022, which could be an overcount or undercount of the number of people who might join a fee-paying society. Only about 60% were full-time employed or self-employed; most of the remainder were students.
Maybe a leaner, more democratic meta would be a good thing—I don’t have a firm opinion on that.
To make sure I understand; this is an answer to “what should EA do if it decides OpenPhil’s power isn’t justified?” And the answer is “defer less, and build a grassroots community structure?”
I’m not sure what distinction you’re pointing at with structure vs. restructure. They both take money that would have to come from somewhere (although we can debate how much money). Maybe you mean OP wouldn’t actively oppose this effort?
To the first: Yup, it’s one answer. I’m interested to hear other ideas too.
Structure vs restructuring: My point was that a lot of the existing community infrastructure OP funds is mislabelled and is closer to a deep recruitment funnel for longtermist jobs rather than infrastructure for the EA community in general. So for the EA community to move away from OP infrastructure wouldn’t require relinquishing as much infrastructure as the labels might suggest.
For example, and this speaks to @Jason’s comment, the Center for Effective Altruism is primarily funded by the OP longtermist team to (as far as I can tell) expand and protect the longtermist ecosystem. It acts and prioritizes accordingly. It is closer to a longtermist talent recruitment agency than a center for effective altruism. EA Globals (impact often measured in connections) are closer to longtermist job career fairs than a global meeting of effective altruists. CEA groups prioritize recruiting people who might apply for and get OP longtermist funding (“highly engaged EAs”).
I think we have a lot of agreement in what we want. I want more community infrastructure to exist, recruiting to be labeled as recruiting, and more people figuring out what they think is right rather than deferring to authorities.
I don’t think any of these need to wait on proving open phil’s power is unjustified. People can just want to do them, and then do them. The cloud of deference might make that harder[1], but I don’t think arguing about the castle from a position of entitlement makes things better. I think it’s more likely to make things worse.
Acting as if every EA has standing to direct open phil’s money reifies two things I’d rather see weakened. First it reinforces open phil’s power, and promotes deference to it (because arguing with someone implies their approval is necessary). But worse, it reinforces the idea that the deciding body is the EA cloud, and not particular people making their own decisions to do particular things[2]. If open phil doesn’t get to make its own choices without community ratification, who does?
I remember reading a post about a graveyard of projects CEA had sniped from other people and then abandoned. I can’t find that post and it’s a serious accusation so I don’t want to make it without evidence, but if it is true, I consider it an extremely serious problem and betrayal of trust.
yes, everyone has standing to object to negative externalities
narrow is meant to be neutral to positive here. No event can be everything to all people, I think it’s great they made an explicit decision on trade-offs. They maybe could have marketed it more accurately. They’re moving that way now and I wish it had gone farther earlier. But I think even perfectly accurate marketing would have left a lot of people unhappy.
Maybe some people argued from a position of entitlement. I skimmed the comments you linked above and I did not see any entitlement. Perhaps you could point out more specifically what you felt was entitled, although a few comments arguing from entitlement would only move me a little so this may not be worth pursuing.
The bigger disagreement I suspect is between what we think the point of EA and the EA community is. You wrote that you want it to be a weird do-ocracy. Would you like to expand on that?
Maybe you two might consider having this discussion using the new Dialogue feature? I’ve really appreciated both of your perspectives and insights on this discussion, and I think the collaborative back-and-forth you’re having seems a very good fit for how Dialogues work.
That’s helpful.
So in this hypothetical, certain functions transfer to the fee-paying society, and certain functions remain funded by OP. That makes sense, although I think the range of what the fee-paying society can do on fees alone may be relatively small. If we estimate 2,140 full fee-payers at $200 each and 1,428 students at $50 each, that’s south of $500K. You’d need a diverse group of EtGers willing to put up $5K-$25K each for this to work, I suspect. I’m not opposed; in fact, my first main post on the Forum was in part about the need for the community to secure independent funding for certain epistemically critical functions. I just want to see people who advocate for a fee-paying society to bite the bullet of how much revenue fees could generate and what functions could be sustained on that revenue. It sounds like you are willing to do so.
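To spell out the arithmetic behind that estimate, here is a rough back-of-envelope sketch. It assumes (my guess, not something stated explicitly) that the 2,140 / 1,428 split just applies the ~60% employment figure to the 3,567 EA Survey 2022 respondents mentioned above, and it treats the $200 and $50 fee levels as purely illustrative:

```python
# Back-of-envelope revenue estimate for a hypothetical fee-paying EA society.
# Assumes the member counts derive from the EA Survey 2022 figures above;
# the $200 / $50 annual fees are illustrative guesses, not proposals.

survey_respondents = 3567
employed_share = 0.60  # roughly 60% full-time employed or self-employed

full_members = round(survey_respondents * employed_share)  # ~2,140
student_members = survey_respondents - full_members        # ~1,427 (vs. 1,428 above)

full_fee = 200    # USD per year, assumed
student_fee = 50  # USD per year, assumed

revenue = full_members * full_fee + student_members * student_fee
print(f"{full_members} full members + {student_members} students -> ${revenue:,}/year")
# -> roughly $499,000/year, i.e. "south of $500K"
```

Either way, the headline conclusion holds: membership fees alone land somewhere under roughly $500K/year.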
But looping back to your main point about “huge amounts of soft and hard power over the EA community” held by OP, how much would change in this hypothetical? OP still funds the bulk of EA, still pays for the “recruitment funnel,” pays the community builders, and sponsors the conferences. I don’t think characterizing the bulk of what CEA et al. do as a “recruitment funnel” for the longtermist ecosystem renders those functions less important as sources of hard and soft power. OP would still be spending ~ $20-$30MM on meta versus perhaps ~ $1-2MM for the fee-paying society.
OP and most current EA community work takes a “Narrow EA” approach. The theory of change is that OP and EA leaders have neglected ideas and need to recruit elites to enact these ideas. Buying castles and funding expensive recruitment funnels is consistent with this strategy.
I am talking about something closer to a big tent EA approach. One vision could be to help small and medium donors in rich countries spend more money more effectively on philanthropy, with a distinctive emphasis on cause neutrality and cause prioritization. This can and probably should be started in a grassroots fashion with little money. Spending millions on fancy conferences and paying undergraduate community builders might be counter to the spirit and goals of this approach.
A fee-paying society is a natural fit for big tent EA and not for narrow EA.
I didn’t know that the huge amounts of power held by OP were my main point! I was trying to use that to explain why EA community members were so invested in the castle. I’m not sure I succeeded, especially since I agree with @Elizabeth’s points that no one needs to wait for permission from OP or anyone else to pursue what they think is right, and that the EA community cannot direct OP’s donations.
I personally would love to see a big-tent organization like the one you describe! I think it less-than-likely that the existence of such an organization would have made most of the people who were “so invested in the castle” significantly less so. But there’s no way to test that. I agree that a big-tent organization would bring in other people—not currently involved in EA—who would be unlikely to care much about the castle.
“Castles”, plural. The purchase of Wytham Abbey gets all the attention, but everyone ignores that during that same time there was also the purchase of a chateau in Hostačov using FTX funding.
I think an underappreciated part of castlegate is that it fairly easily puts people in an impossible bind.
EA is a complicated morass, but there are a few tenets that are prominent, especially early on. These may be further simplified, especially in people using EA as treatment for their scrupulosity issues. For most of this post I’m going to take that simplified point of view (I’ll mark when we return to my own beliefs).
Two major, major tenets brought up very early in EA are:
You should donate your money to the most impactful possible cause
Some people will additionally internalize “The most impactful in expectation”
GiveWell and OpenPhil have very good judgment.
The natural conclusion of which is that donating to GiveWell- or OpenPhil-certified causes is a safe and easy way to fulfill your moral duty.
If you’re operating under those assumptions and OpenPhil funds something without making their reasoning legible, there are two possibilities:
The opportunity is bad, which at best means OpenPhil is bad, and at worst means the EA ecosystem is trying to fleece you.
The opportunity is good but you’re not allowed to donate to it, which leaves you in violation of tenet #1.
Both of which are upsetting, and neither of which really got addressed by the discourse.
I don’t think these tenets are correct, or at least they aren’t complete. I think goodharting on a simplified “most possible impact” metric leads to very bad places. And I think that OpenPhil isn’t even trying to have “good judgment” in the sense that tenet #2 means it. Even if they weren’t composed of fallible humans, they’re executing a hits-based strategy, which means you shouldn’t expect every opportunity to be immediately, legibly good. That’s one reason they don’t ask for money from small donors. Which means OpenPhil funding things that aren’t legibly good doesn’t put me in any sort of bind.
I think it would be harmful to force all of EA to fit the constraints imposed by these two tenets. But I think enough people are under the impression it should that it rises to the level of a problem worth addressing, probably through better messaging.
Where does the “you’re not allowed to donate to it” part of #2 come from?
Because it’s not legible, and willingness to donate to illegible things opens you up to scams.
OpenPhil also discourages small donations, I believe specifically because they don’t want to have to justify their decisions to the public, but I think they will accept them.
Saying you’re not allowed to donate to the projects is much stronger than either of these things, though. E.g. re your 2nd point, nothing is stopping someone from giving top-up funding to projects/people that have received OpenPhil funding, and I’m not sure anyone feels like they’re being told they shouldn’t? E.g. the Nonlinear Fund was doing exactly this kind of marginal funding.
I agree they’re allowed to seek out frontier donations, or for that matter give to Open Phil. I believe that this doesn’t feel available/acceptable, on an emotional level, to a meaningful portion of the EA population, who have a strong need for both impact and certainty.
Utilitarianism without strong object-level truthseeking be like
(credit: found on Twitter, uncredited)
Original source is here.
A good summary of pop-Bayesianism failure modes. Garbage in is still garbage out, even if you put the garbage through Bayes’ theorem.
Salaries at direct work orgs are a frequent topic of discussion, but I’ve never seen those conversations make much progress. People tend to talk past each other: they’re reading words differently (“reasonable”), or have different implicit assumptions that change the interpretation. I think the questions below could resolve a lot of the confusion (although not all of it, and not the underlying question. Highlighting different assumptions doesn’t tell you who’s right, it just lets you focus discussions on the actual disagreements).
Here’s my guess for the important questions. Some of them are contingent: e.g. you might think new grad generalists and experienced domain experts should be paid very differently. Feel free to give as many sets of answers as you want, just be clear which answers lump together, so no one misreads your expert salary as if it was for interns.
What kind of position are you thinking about?
Experienced vs. new grad
Domain expertise vs generalist?
Many outside options vs. few?
Founder vs employee?
What salary are you thinking about?
What living conditions do you expect this salary to buy?
Housing?
Location?
Kids?
Food?
Savings rate
What is your bar for “enough” money? Not keeling over dead? Peak productivity but miserable? Luxury international travel 2x/year?
What percentage of people can reach that state with your suggested salary?
Some things that might make someone’s existence more expensive:
health issues (physical and mental)
Kids
Introversion
Ailing parents
Distant family necessitating travel.
Burnout requiring a period of unemployment.
What do you expect to happen for people who can’t thrive in those conditions?
If you lost your top choice due to insufficient salary, how good do you expect the replacement to be?
What is your counterfactual for the money saved on salary?
People often cite EA salaries as higher than other non-profits, but my understanding is that most non-profits pay pretty badly. Not “badly” as in “low”, but “badly” as in “they expect credentials, hours, and class signals that are literally unaffordable on the salary they pay. The only good employees who stick around for >5 years have their bills paid by a rich spouse or parent.”
So I don’t think that argument in particular holds much water.
Do you have any citations for this claim?
Implicit and explicit, from https://askamanager.com/ and https://nonprofitaf.com/ (which was much epistemically stronger in its early years)
n = 1, but my wife has worked in non-EA non-profits her whole career, and this is pretty much true. It’s mostly women earning poorly at the non-profit while their husbands make bank at big corporates.
Where does this idea come from, Elizabeth? From my experience (n=10) this argument is incorrect. I know a bunch of people who work in these “badly” paying jobs you talk of who defy your criteria: they don’t have their bills paid by a rich parent; instead they are content with their work and accept a form of “salary sacrifice” mindset, even if they wouldn’t phrase it in those EA terms.
EA doesn’t have a monopoly on altruism. There are plenty of folks out there living simply and working for altruistic causes they believe in, even though it doesn’t pay well and they could be earning way more elsewhere, outside of conventional market forces.
The sense I get reading this is that you feel I’ve insulted your friends, who have made a big sacrifice to do impactful work. That wasn’t my intention and I’m sorry it came across that way. From my perspective, I am respecting the work people do by suggesting they be paid decently.
First, let me take my own advice and specify what I mean by decently: I think people should be able to have kids, have a sub-30 minute commute, live in conditions they don’t find painful (living with housemates only if they like it, nothing physically dangerous, outdoor space if they need that to feel good; any of these may come at a trade-off with the others, and probably no one gets all of them, but you shouldn’t be starting out from a position where it’s impossible to get reasonable needs met), save for retirement, have cheap vacations, have reasonably priced hobbies, pay their student loans, and maintain their health (meaning both things like healthcare, and things like good food and exercise). If they want to own their home, they shouldn’t be too many years behind their peers in being able to do so.
I think it is both disrespectful to the workers and harmful to the work to say that people don’t deserve these things, or should be willing to sacrifice it for the greater good. Why on earth put the pressure on them to accept less[1], and not on high-earners to give more? This goes double for orgs that require elite degrees or designer clothes: if you want those class signals, pay for them.
There’s an argument here that low payment screens for mission alignment. I think this effect is real, but is insignificant at the level I’ve laid out.
Hey Elizabeth, just to clarify: I don’t think you’ve insulted my friends at all, don’t worry about that. I just disagreed, from my experience at least, that this was the situation for most NGO workers, as you claimed. I get that you are trying to respect people by pushing for them to be paid more; it’s all good.
As a small note, I don’t think they have made a “big sacrifice”; most wouldn’t say they have made any sacrifice at all. They have traded earning money (which might mean less to them than to other people anyway) for a satisfying job while living a (relatively) simple lifestyle which they believe is healthy for themselves and the planet. Personally I don’t consider this a sacrifice either, just living your best life!
I’m going to leave it here for now (not in a bad way at all), because I suspect our underlying worldviews differ to such a degree that it may be hard to debate these surface salary and lifestyle issues without first probing at deeper underlying assumptions about happiness, equality, “deserving”, etc., which would take a longer discussion that might be tricky in a forum back-and-forth.
Not saying I’m not up for discussing these things in general though!
I tested a version of these questions here, and it worked well. A low-salary advocate revealed a crux they hadn’t revealed before (that there is little gap between EA orgs’ first- and later-choice candidates), and people with relevant data shared it (the gap may be a 50% drop in quality, or not filling the position at all).
This is an interesting model, but what level of analysis do you think is best for answering question 7? One could imagine answering this question at:
the vacancy level at the time of hire decision (I think Bob would be 80% as impactful as the frontrunner, Alice)
the vacancy level at the time of posting (I predict that on average the runner-up candidate will be 80% as good as the best candidate would be at this org at this point in time)
the position level (similar, but based on all postings for similar positions, not just this particular vacancy at this point in time)
the occupational field level (e.g., programmer positions in general)
the organizational level (based on all positions at ABC Org; this seems to be implied when an org sets salaries mainly by org-wide algorithm)
the movement-wide level (all EA positions)
the sector-wide level (which could be “all nonprofits,” “all tech-related firms,” etc.)
the economy-wide level.
I can see upsides and downsides to using most of these to set salary. One potential downside is, I think, common to analyses conducted at a less-than-organizational level.
Let’s assume for illustrative purposes that 50% of people should reach the state specified in question 4 with $100K, and that the amount needed is normally distributed with a standard deviation of $20K due to factors described in step five and other factors that make candidates need less money. (The amount needed likely isn’t normally distributed, but one must make sacrifices for a toy model.) Suppose that candidates who cannot reach the question-4 state on the offered salary will decline the position, while candidates who can will accept. (Again, a questionable but simplifying assumption.)
One can calculate, in this simplified model, the percentage of employees who could achieve the state at a specific salary. One can also compute the amount of expected “excess” salary paid (i.e., the amounts that were more than necessary for employees to achieve the desired state).
If the answer to question 7 is that losing the top candidate would have a severe impact, one might choose a salary level at which almost all candidates could achieve the question-four state, say +2.5 SD (i.e., $150K) or even +3 SD ($160K). But this comes at a cost: the employer has likely paid quite a bit of “excess” salary (on average, roughly $50K of the $150K salary will be “excess”).
On the other hand, if there are a number of candidates of almost equivalent quality, it might be rational to set the salary offer at $100K, or even at −0.5 SD ($90K), accepting that the organization will lose a substantial fraction of candidates as a result.
I suspect you would then have a morale problem with certain employees running the numbers and concluding that they were seen as considerably more replaceable than others who were assigned the same level!
You can fix that by answering question 7 at the organizational or movement levels, averaging the answers for all positions. Suppose that analysis led to the conclusion that your org should offer salaries at this position grade level based on +1 SD ($120K). But you’re still running a 16% risk that the top candidate for the position with no good alternative will decline, while you’re not getting much ROI for the “excess” money spent for certain other positions. You could also just offer $150K to everyone at that level, but that’s harder to justify in the new world of greater funding constraints.
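To make those percentages concrete, here is a minimal sketch of the toy model in Python (the $100K mean and $20K SD are the illustrative assumptions above, scipy’s normal distribution stands in for the real distribution of needs, and candidates are assumed to accept iff the offer covers what they need):

```python
# Toy model: each candidate's "amount needed" ~ Normal(mean=$100K, sd=$20K).
# A candidate accepts iff the offered salary covers what they need.
# "Excess" = offer minus need, averaged over the candidates who accept.
from scipy.stats import norm

MEAN, SD = 100_000, 20_000  # illustrative assumptions from the toy model above

def acceptance_rate(offer):
    """Fraction of candidates whose needs are covered by the offer."""
    return norm.cdf(offer, MEAN, SD)

def expected_excess(offer):
    """Average (offer - need) among candidates who accept."""
    z = (offer - MEAN) / SD
    # Mean of a normal distribution truncated above at `offer`:
    # MEAN - SD * pdf(z) / cdf(z)
    mean_need_given_accept = MEAN - SD * norm.pdf(z) / norm.cdf(z)
    return offer - mean_need_given_accept

for offer in (90_000, 100_000, 120_000, 150_000, 160_000):
    print(f"offer ${offer:,}: accept {acceptance_rate(offer):.0%}, "
          f"mean excess ${expected_excess(offer):,.0f}")
```

At +1 SD ($120K) this reproduces the ~16% decline risk mentioned above, and at +2.5 SD ($150K) the average “excess” comes out to roughly $50K per accepted hire.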
In sum, the mode of analysis that I infer from your questions seems like it would be very helpful when looking at a one-off salary setting exercise, but I’m unsure how well it would scale.
Ambition snowballs/Get ambitious slowly works very well for me, but some people seem to hate it. My first reaction is that these people need to learn to trust themselves more, but today I noticed a reason I might be unusually suited for this method.
Two things that keep me from aiming at bigger goals are laziness and fear: primarily fear of failure, but also of doing uncomfortable things. I can overcome this on the margin by pushing myself (or having someone else push me), but that takes energy, and the amount of energy never goes down the whole time I’m working. It’s like holding a magnet away from its twin: you can do it, but the minute you stop, the system snaps back into place.
But more than I am lazy and fearful, I am easily bored, and hate boredom even more than I hate work or failure. If I hang around my comfort zone long enough I get bored of it and naturally start exploring outside. And that expansion doesn’t take energy; in fact it takes energy to keep me in at that point.
My mom used a really simple example of this on my brother when he was homeschooled (6th grade). He’d had some fairly traumatic experiences in English class and was proving resistant to all her teaching methods. Finally she sat him down in front of a computer and told him he had to type continuously for X minutes. It could be literally anything he wanted, including “I can’t think of anything to write about”; he just had to keep his fingers typing the entire time (he could already touch type at this point; mom bribed us with video games until we got to 60 WPM). I don’t remember exactly how long this took to work. I think it took her a while to realize she had to ban copy/paste, but the moment she did, my brother got so bored of typing the same thing that he typed new things, and then education could slip in.
So I’m not worried about being stuck, because I will definitely gnaw my own leg off just to feel something if that happens. And it’s unclear if I can speed up the process by pushing myself outside faster, because leaving comfort zone ring n too early delays getting bored of it (although done judiciously it might speed up the boredom).
I’ll be at EAGxVirtual this weekend. My primary goal is to talk about my work on epistemics and truthseeking within EA, and especially get the kind of feedback that doesn’t happen in public. If you’re interested, you can find me on the usual channels.
Sounds cool, thanks for attending!
I’m pretty sure you can’t have consequentialist arguments for deceiving allies or yourself, because consequentialism relies on accurate data. If you’ve blinded yourself, then you can have the best utility function in the world and it will do you no good, because you’re applying it to gibberish.