Where I agree:
Experimentation with decentralised funding is good. It's a real shame that EA may not end up learning very much from the FTX regranting program, because all the staff at the foundation quit (for extremely good reasons!) before many of the grants were evaluated.
More engagement with experts. Obviously, this trades off against other things, and it's easier to engage with experts when you have money to pay them for consultations, but I'm sure there are opportunities to engage with them more. I suspect that a lot of the time the limiting factor is simply people not knowing who to reach out to, so perhaps one way to make progress would be to compile a list of experts who are willing to be contacted by people at EA orgs, subject to availability?
I would love to see more engagement from Disaster Risk Reduction, Futures Studies, Science and Technology Studies, etc. I would encourage anyone with such experience to consider posting on the EA Forum. You may want to consider extracting this section into a separate forum post for greater visibility.
I would be keen to see experiments where people vote on funding decisions (although I would be surprised if voting turned out to be the right primary mechanism for the vast majority of funds, rather than a supplement).
Where I disagree:
I suspect it would be a mistake for EA to shift too far towards always just adopting the expert consensus. As EAs, we need to back ourselves, without becoming overconfident. If EAs had just deferred to the consensus of development studies experts, EA wouldn't have gotten off the ground. If EAs had just deferred to the most experienced animal advocates, that would have biased us towards the wrong interventions. If EAs had just deferred to ML researchers, we would have skipped over AI safety as a cause area.
I don’t think EA is too focused on AI safety. In fact, I suspect that in a few years, we’ll probably feel that we underinvested in it given how fast it’s developing.
I see value-alignment as incredibly important for a movement that actually wants to get things done, rather than being pulled in several different directions. I agree that it comes with significant risks, such as those you've identified; however, I think we just have to trust in our ability to navigate those risks.
I agree that we need to seek critiques beyond what the existing red-teaming competition and cause exploration prizes have produced, although I'm less of a fan of your specific proposals. My ideal proposal would be to take a few teams of smart, young EAs who already have a strong understanding of why things are the way they are in EA, and give them a grant to spend time thinking about how they would construct the norms and institutions of EA if they were building them from the ground up. Movements tend to advance by having the youth break with tradition, so I would favour accelerating this natural process over the suggestions presented.
While I would love to see EA institutions achieve a broader base of funding, this feels more like something that would be nice to have rather than something worth risking disruption to your operations over.
Voting isn't a panacea. Countries have a natural answer to who gets to vote—every citizen. EA has no such obvious answer. I can't see open internet polls being a good idea, given how easily they can be manipulated, so we'd then require a definition of a member. That would mean either membership fees or recording attendance at EA events, so there would be a lot of complexity in making this work.
I think I basically agree here, and I think it's mostly about balance; criticism should, I think, be seen as pulling in a direction rather than wanting to go all the way to an extreme (although there definitely are people who want that extreme, who I strongly disagree with!).

On AI safety, I think in a few years it will look like EA overinvested in the wrong approaches (i.e. helping OpenAI and not opposing capabilities).

I think I agree that the post sees voting/epistemic democracy in too rosy-eyed a way. On the other hand, a philosopher of science I know told me that x-risk was the most hierarchical field they'd seen. Moreover, democracy can come in gradations, and I don't think EA will ever be perfect.

On your point about youth, I think that's interesting. I'm not sure the current culture would necessarily allow this though, with many critical EAs I know essentially scared to share criticism, having been sidelined by people with more power who disagree, or having had the credit for their achievements taken by people more senior, making it harder for them to have the legitimacy to push for change. This is why I like the cultural points this post makes: it does seem we need a better culture to achieve our ideals.
“On AI safety, I think in a few years it will look like EA overinvested in the wrong approaches (i.e. helping OpenAI and not opposing capabilities)”—I agree that this was a mistake.
“I’m not sure the current culture would necessarily allow this though, with many critical EAs I know essentially scared to share criticism”—that’s worrying. Hopefully seeing this post be highly upvoted makes people feel less scared.
Given that EA Global already has an application process that does some filtering, you could likely use the attendance lists.
Lots of good points here. One slight critique and one suggestion to build on the above. If I seem at all confrontational in tone, please know that this is not my aim—I think you made a solid comment.
Critique: I'm very cautious about the belief that “smart, young EAs”, given grants to think about things, are the best solution to anything, no matter how well they understand the community. In my mind, one of the most powerful messages of the OP is the one regarding a preference for orthodox yet inexperienced people over those with demonstrable experience but little value alignment. Youth breaking from tradition doesn't seem a promising hope when a very large portion of this community is, and always has been, in its youth. Indeed, EA was built from the ground up by almost exactly the people in your proposed teams. I'm sure there are smart, young EAs readily available in our labour force to accept these grants, far more readily than people who also deeply understand the community but do not consider themselves EAs (whose takes should be the most challenging), or who have substantial experience in setting good norms and cultural traits (whose insights will surely be wiser than ours). I worry the availability and/or orthodoxy of the former makes them seem more ideal than the latter.
Suggestion: I absolutely share your concerns about how the EA electorate would be decided upon. As an initial starting point, I would suggest that voting power be given to people who take the Giving What We Can pledge and uphold it for a stated minimum time. This serves the costly-signalling function without expecting people to simply buy “membership”. My suggestion has very significant problems that many will see at first glance, but I share it in case others can find a way to make it work. Edit: It seems others have thought about this a lot more than I have, and it seems intractable.
I don't see my suggestion of getting a few groups of smart, young EAs as mutually exclusive with engaging with experts.
Obviously the two trade off in terms of funds and organiser effort, but it wouldn't actually be that expensive to cover the basic living expenses of a few young people.