I had a painful experience where I was in a pretty promising position in my career, already quite involved in EA, and seeking direct work opportunities as a software developer and entrepreneur. I was rejected from EAG twice in a row while my partner, a newcomer who just wanted to attend for fun (which I support!!!), was admitted both times. I definitely felt resentful and jealous in ways that I would say I coped with successfully, but wow, did the whole thing feel lame and unnecessary.
I felt rejected from EA at large, and yeah, I do think my life plans have adjusted in response. I know there were many such cases! At the height of my involvement I was a very devoted EA and really believed in giving as much as I could bear (time included).
This level of devotion, juxtaposed with being turned away from even hanging out with people, was quite a shock. I think the high-devotion version of my life would be quite fulfilling and beautiful, and I got into EA seeking a community for that, but never found it. EAG admissions is a pretty central example of this mismatch to me.
Relatedly to the point about time, I wish we knew more about how much money is spent on community building. It might be very surprising! (hint hint)
Sorry, I did not realize that OP doesn't solicit donations from non-megadonors. I agree this recontextualizes how we should interpret transparency.
Given the lack of donor diversity, though, I am confused about why their cause areas would be so diverse.
Well, this is still confusing to me.
--in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public
Seems obviously true, and in fact a continued premise of your post is that there are key facts absent that could explain or fail to explain one decision or the other. Is this particularly true in criminal justice reform? Compared to, I don't know, orgs like AMF (which are hyper-transparent by design), maybe; compared to stuff around AI risk, I think not.
--My guess is that a “highly intelligent idealized utilitarian agent” probably would have invested a fair bit less in criminal justice reform than OP did, if at all.
This is basically the same thesis as your post and does not actually convey much information (I assume it is what anyone would have already guessed Ozzie thought).
--I think we can rarely fully trust the public reasons for large actions by large institutions. When a CEO leaves to “spend more time with family”, there’s almost always another good explanation. I think OP is much better than most organizations at being honest, but I’d expect that they still face this issue to an extent. As such, I think we shouldn’t be too surprised when some decisions they make seem strange when evaluating them based on their given public explanations.
Yeah, I mean, no kidding. But it's called Open Philanthropy. It's easy to imagine there exists a niche for a meta-charity with high transparency and visibility. It also seems clear that Open Philanthropy advertises itself as a fulfillment of this niche as much as possible, and that donors do want this. So when their behavior seems strange in a cause area and the amount of transparency on it is very low, I think this is notable, even if the norm among orgs is to obfuscate internal phenomena. So I don't really endorse any normative takeaway from this point about how orgs usually obfuscate information.
--We are currently at around 50 ideas and will hit 100 this summer.
This seems like a great opportunity to sponsor a contest on the forum.
Also, there is an application out there for running polls where users make pairwise comparisons over items in a pool and a ranking is imputed. It's not necessary for all pairs to be compared, so the system scales to a high number of alternatives. I don't remember what it's called; it was a research project presented by a group when I was in college. I do think it could be a good way to extract a ranking from a crowd (as an alternative to upvotes/downvotes and other mechanisms). If you are super excited about this, I can spend some time at some point trying to hunt it down.
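For flavor, here is a minimal sketch of the general idea (my own illustration, not the tool I'm remembering): fit a Bradley-Terry model to sparse pairwise votes and read the ranking off the fitted strengths. All item names and votes below are made up.

```python
from collections import defaultdict

def rank_from_pairwise(items, comparisons, iterations=200):
    """Impute a ranking from sparse pairwise votes via a Bradley-Terry model.

    comparisons: list of (winner, loser) tuples; not every pair needs to
    have been compared.
    """
    wins = defaultdict(int)         # total wins per item
    pair_counts = defaultdict(int)  # comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    strength = {item: 1.0 for item in items}
    for _ in range(iterations):
        updated = {}
        for i in items:
            # Standard MM update: strength_i = wins_i / sum over opponents
            # of (times compared) / (strength_i + strength_j).
            denom = sum(
                pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i and pair_counts[frozenset((i, j))]
            )
            updated[i] = wins[i] / denom if denom else strength[i]
        # Normalize so strengths average to 1 (the model is scale-invariant).
        norm = sum(updated.values()) / len(items)
        strength = {item: s / norm for item, s in updated.items()}
    return sorted(items, key=strength.get, reverse=True)

# Hypothetical example: 5 votes over 4 ideas; not all pairs were compared.
votes = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("A", "D")]
print(rank_from_pairwise(["A", "B", "C", "D"], votes))  # e.g. ['A', 'B', 'C', 'D']
```

The nice property (and I assume why the research project used something like it) is that a full ranking falls out even when most pairs were never directly compared.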
Your approach to exploring solutions is neat. Good luck.
One idea I would suggest is trying to bring personal doomsday solutions to market that actually work super well, or upgrading the best-available option somehow.
It cracks me up that this is the first comment you’ve ever gotten posting here; it really is not the norm.
The comment is using what I call “EA rhetoric”, which has sort of evolved on the forum over the years, where posts and comments are padded out with extra words and other devices. To the degree this is intended to be evasive, it is further bad, as it harms trust. These devices are perfectly visible to outsiders.
I agree that this has evolved on the forum over the years and it is driving me insane. Seems like a total race to the bottom to appear as the most thorough thinker. You’re also right to point out that it is completely visible to outsiders.
It’s interesting that you say that, given what is in my eyes a low amount of content in this comment. What model, or model-extracted part, did you like in this comment?
Decent discussion on Twitter, especially from @MichaelDello
https://twitter.com/brianluidog/status/1534738045483683840
To me the biggest challenge in assessing impact is the empirical question of how much any supply increase in meat or meat-like products leads to replacement of other meat. But this applies equally to the accepted cause areas of meat replacers and cell culture.
On what gets substituted: in my experience it’s very clear that scallop is served as a main-course protein in contexts where the alternative is clearly fish, or most often shrimp. So insofar as substitution occurs, we’d mainly see substitution of shrimp and fish.
However, it is not clear how much substitution of meat in fact occurs at all as supply increases. People generally seem to like eating meat and meat-like products. I don’t know the data here, but meat consumption is rising globally.
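To make that dependence concrete, here is a toy calculation (a sketch with entirely made-up suffering weights, purely illustrative) showing why the substitution rate dominates the assessment:

```python
# Toy model: net suffering impact per kg of farmed bivalve supplied,
# as a function of how much of it substitutes for other seafood.
# All weights below are hypothetical placeholders, not real estimates.

SUFFERING_PER_KG = {
    "shrimp": 20.0,   # many individuals per kg
    "fish": 5.0,
    "bivalve": 0.5,   # small positive weight in case bivalves are sentient
}

def net_impact_per_kg(substitution_rate, displaced="shrimp"):
    """Negative values mean a net reduction in suffering.

    substitution_rate: fraction of each kg of bivalve that displaces the
    `displaced` product rather than adding to total consumption.
    """
    avoided = substitution_rate * SUFFERING_PER_KG[displaced]
    added = SUFFERING_PER_KG["bivalve"]
    return added - avoided

for rate in (0.0, 0.25, 0.5, 1.0):
    print(f"substitution {rate:.0%}: net impact {net_impact_per_kg(rate):+.2f}")
```

With these made-up numbers the sign of the intervention flips between 0% substitution (net harm, if bivalves are sentient) and any substantial substitution (large net reduction), which is why the empirical question matters so much.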
https://www.animal-ethics.org/snails-and-bivalves-a-discussion-of-possible-edge-cases-for-sentience/#:~:text=Many%20argue%20that%20because%20bivalves,bivalves%20do%20in%20fact%20swim
I found this discussion interesting. To me it seems like they feel aversion (and I'm not sure how that is any different from suffering), so it is just a question of “how much?”.
Why not take it a step further and ask funders if you should buy yourself a laptop?
--Are re-granters vetting applicants to the fund (or do they at least get to see them), or do they just reach out to individuals/projects they’ve come across elsewhere?
I don’t think that their process is so defined. Some of them may solicit applications, I have no idea. In my case, we were writing an application for the main fund, solicited notes from somebody who happened to be a re-granter without us knowing (or at least without me knowing), and he ended up opting to fund it directly.
--Still, grantmakers, including re-granters [...]
No need to restate
--Animal advocates (including outside EA) have been trying lots of things with little success and a few types of things with substantial success, so the track record for a type of intervention can be used as a pretty strong prior.
It’s definitely true that in a pre-paradigmatic context vetting is at its least valuable. Animal welfare does seem a bit pre-paradigmatic to me as well, relative to, for example, global health. But not as much as longtermism.
--
Concretely:
It seems relevant whether regranters would echo your advice as applied to a highly engaged EA who is aware of a great-seeming opportunity to disburse a small amount of funds (for example, a laptop’s worth). I highly doubt that they would. This post by Linch https://forum.effectivealtruism.org/posts/vPMo5dRrgubTQGj9g/some-unfun-lessons-i-learned-as-a-junior-grantmaker does not strike me as writing by somebody who would like to be asked to micromanage sub-$20k sums of money more than the status quo.
I appreciate the praise! Very cool.
I don’t agree with your analysis of the comment chain.
--(and his beliefs about the specific funders, which you and Sapphire may not understand well, as this is cause-area dependent).
--Your choice of him to press seems misguided, as he has no direct involvement or strong opinions on the AI safety object-level issues that I think you care about.
These assertions / assumptions aren’t true. He didn’t limit his commentary (which was a reply / rebuttal to Sapphire) to animal welfare. Even if he had, it would still be irrelevant, given that animal welfare is Sapphire’s dominant cause area. In fact, his response (corrected by Sapphire) re: Rethink was misleading! So I’m not sure how this reading is supported.
--I thought you ignored this reasonable explanation
I am also not really sure how this reading is supported.
Tangentially: as a matter of fact, I think EA has been quite negative for animal welfare, in large part because CEA is a group of longtermists who co-opted efforts to organize effective animal welfare and then neglected it. I am a longtermist too, but I think the growth potential for effective animal welfare is much higher and should not be bottlenecked by a longtermist movement. I engage with animal welfare as a cause area about as much as longtermism, excluding donations.
--As mentioned I was/am in these circles (whatever that means). I don’t really have the heart to attack the work and object level issues to someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.
There is really no shortage of unspecific commentary about leftism (or any other ideological classification) on LW, EAF, Twitter, etcetera. Other people seem to like it a lot more than I do. Discussion that I find valuable is overwhelmingly specific, clear, and object-level. Heuristics are fine but should be clearly relevant and strong. Commentary that doesn't meet that bar is responsible for a ton of noise, and the noise is even noisier when it appears in a reply and superficially resembles conversation.
Wdym by “do they get to see the applicants”? (For context, I am a regrant recipient.) The Future Fund does one final review with a possible veto over the grant, but I was told this was just to catch major reputational risks and did not really involve effectiveness evaluation. My regranter did not seem to think it’s a major filter, and I’d be surprised to learn that this veto has ever been exercised (or that it will have been in a year’s time).
--Still, the re-granters are grantmakers, and they’ve been vetted. They’re probably much better informed than the average EA.
I mean, you made pretty specific arguments about the information theory of centralized grants. Once you break up across even 20 regranters, the effects you are arguing for (the effects of also knowing about all the other applications) become turbo diminished.
As far as I can tell, none of your arguments are especially targeted at the average EA at all. You and Sapphire are both personally much better informed than the average EA.
--Yes, but I expect funders/evaluators to be more informed about which undercover investigators would be best to fund, since I won’t personally have the time or interest to look into their particular average cost-effectiveness, room for more funding, track record, etc., on my own.
Since we are talking about funding people within your network that you personally know, not randos, the idea is that you already know this stuff about some set of people. Like, explicitly, the case for self-funding norms is the case for utilizing informational capital that already exists rather than discarding it.
--Knowing that one opportunity is really good doesn’t mean there aren’t far better ones doing similar work.
I think it is not that hard to keep up with what last year’s best opportunities looked like and get a good sense of where the bar will be this year. Compiling the top 5 opportunities or whatever is a lot more labor-intensive than reviewing the top 5, and you already state you are informed enough to know about and agree with the decisions of funders. So I disagree about the degree to which we should think we are flying blind.
--If the disagreement comes down to a normative or decision-theoretic one
Yes, I think this will be the most common source of disagreement, at least in your case, my case, and Sapphire’s case. With respect to the rejections I know about, this was the case.
All of that said, I think I have updated from your posts toward being more encouraging of applying for EA funding and/or making forum posts. I will not do this in a deferential manner, though; to me that seems harmful. I think people should feel discouraged if you explicitly discard what you personally know about their competence and the like.
--Another way of looking at this: if I sincerely believe in my comments, this direct communication is immensely useful, even or especially if I’m wrong.
I do not think this, for lack of actual content. What would it mean for me to change my view on any topic or argument you have advanced? For you to change yours? Would I engage in less “leftist micro activism”? Would I decide DXE is probably net harmful instead of net positive? Would I start believing CEA has been competently executing community building, against evidence? It cashes out to nothing except vague cultural / ideological association.
--I agree that the concerns around “dilution” are evidence of the phenomenon you are discussing.
It remains unclear how impactful you believe this phenomenon has been in this case, which I think is important to convey.
Obviously, if somebody thought X was good, and that EA growth has been slowed because CEA hates X, this would not in itself form an argument for anything except the existence of conflict between CEA and likers of X.
--
--TLDR: Finally, and very directly, actual incidents of real activism are extremely obvious here, and you must admit they involve similar patterns of accusations of centralization, censorship, and dismissal from an out-of-touch, self-interested central authority on causes no one cares about.
Yes, this seems to follow the format of your entire thesis:
1. Agrippa is engaging in, or promoting, X (X is not particularly specified in Charles’s comments, so I have no idea whether Charles could actually accurately describe the difference between my views and those of the average forum poster).
2. X, or some subset of X, is often involved in the culture of toxic and incompetent leftist activism.
3. Toxic and incompetent leftist activism is bad (directly, and because CEA has intentionally funded fewer things for fear of it), so Agrippa should not engage in or promote X.
At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist-coded).
I think this discussion would have to be several layers less removed from the object level in order to contain insight.
--I see the “bycatch” from this shutting down as obstructing many good people, because basically fast growth can’t be trusted.
--There is a list of recent rejects from community building, for example, that would bring in a lot of good people in expectation, if this sort of activism wasn’t a concern.
Your explicit claim seems to be that fear of leftism / leftist activist practices is responsible for a slowing in the growth of EA, because institutions (namely CEA, I assume) are intentionally operating slower than they would if they did not have this fear. Your beliefs about the magnitude of this slowdown are unclear. (Do you think growth has been halved? Cut to a tenth?)
You seem to have strong priors that this is true. I am not aware of any evidence that this phenomenon has occurred, and you have not pointed any out. I am aware of two community-building initiatives over the past 5 years that tried to get funding and were rejected, the EA Hotel and some other thing for training AI safety researchers, and the reasons for rejection were in both cases specific and completely removed from anything you have discussed.
--
I chose what was, in my opinion, the most contentful and specific part of your writing to react to. I think your commentary would be helped by containing more content per word (above zero?).
Thanks for the stuff about RP, that is not as bad as I had thought.
--I donated to RP in late 2019/early 2020 (my biggest donation so far), work there now and think they should continue to scale (at least for animal welfare, I don’t see or pay a lot of attention to what’s going on in the other cause areas, so won’t comment on them)
If you are aware of a major instance where your judgement differed from that of funds, why advocate such strong priors about the efficacy of funds?
--I think their undercover investigations are plausibly very good and rescuing clearly injured or sick animals from factory farms (not small farms) is good in expectation, but we can fund those elsewhere without the riskier stuff.
I agree the investigations seem really good / plausibly highest-impact (and they should matter even just to EAs who want to assess priorities, to say nothing of public awareness). And you can fund them elsewhere / fund individuals to do this yourself! Not via funds.
A) Does this represent a change from previous years? Previous comms have gestured at a desire to get a certain mixture of credentials, including beginners. This is also consistent with private comms and my personal experience.
B) It’s pretty surprising that Austin, a current founder of a startup that received $1M in EA-related funding from FTX regrants, would be below that bar!
Maybe you are saying that there is a bar above which you will get in, but below which you may or may not get in.
I think lack of clarity and mixed signals around this stuff might contribute unnecessarily to hurt feelings.