If you want to understand what expected behavior looks like in these sorts of situations, I would suggest you consider taking a course in journalistic ethics. The industry’s poor reputation for truth-seeking is deserved, but there are standards for when and how to seek comment that would serve you well in this context.
I think it is basically erroneous to say that EA has “refused to engage in the political”:
- Farm animal welfare programs are inherently political.
- Much of EA’s AI safety advocacy is inherently political.
- Open Philanthropy specifically has had a program area since 2022 to influence global aid policy, which is obviously and directly political.
If you’re not proposing electioneering, what exactly is the program that you are suggesting could have prevented these USAID cuts? Because from where I’m sitting, I don’t really think there was anything EA could have done to prevent that, even if the whole weight of the movement were dedicated to that one thing.
Let’s imagine I have a proposal or a white paper. How and where can I submit it for evaluation?
This forum might not be a bad place to start?
Probably a reference to this study: https://thefilter.blogs.com/thefilter/2009/12/the-israeli-childcare-experiment.html
It’s odd to me that people say they “heard about EA” at EA Global. How’d they hear about EA Global, then? 🤔
Thanks for sharing this. It was interesting to read.
I wonder if you wouldn’t mind sharing the rubric for EA involvement. What constitutes a highly engaged EA?
If your idea is that in-country employees/contractors of organizations like GiveDirectly, Fistula Foundation, AMF, MC, Living Goods, etc., should be invited to EA Global, then I agree, and I think these folks often have useful information to add to the conversation. I don’t assume everyone in these orgs is a good fit, but many are, and it’s worth having those voices. Some have an uncritical mindset, basically just doing what they’re told, while others are a bit too sharp-elbowed, chasing whatever can get funders’ attention without caring how good it actually is.
On the other hand, if your idea is to (for example) invite some folks from villages where GiveDirectly is operating, I pretty strongly feel that this would be a waste of resources. We can get a much better perspective from this group by surveying (and indeed GiveWell and GiveDirectly have sponsored such surveys). If you were to choose randomly, most of those chosen wouldn’t be in a good position to contribute to discussions; and if you were to choose village elites, you would end up with a systematic bias toward elite interests, which has been a persistent problem in trying to make bottom-up charitable interventions work.
Another one you missed: the world is getting better over time, so we should expect donation opportunities in the future to be worse, since the easiest problems tend to get solved first.
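To make that concrete, here’s a toy give-now-vs-give-later calculation (a minimal sketch; all the rates are made-up assumptions, not estimates):

```python
# Toy model: donate now vs. invest and donate later, while the best
# opportunities get worse over time. All rates are illustrative assumptions.

def impact_now(amount, cost_effectiveness=1.0):
    """Impact of donating immediately."""
    return amount * cost_effectiveness

def impact_later(amount, years, market_return=0.05, opportunity_decay=0.03,
                 cost_effectiveness=1.0):
    """Invest for `years`, then donate, while opportunities decay."""
    grown = amount * (1 + market_return) ** years
    future_ce = cost_effectiveness * (1 - opportunity_decay) ** years
    return grown * future_ce

for years in (10, 20, 30):
    print(years, impact_now(100.0), round(impact_later(100.0, years), 1))
```

At these particular rates investing still wins (5% growth beats 3% decay), but swap the two rates and giving now wins; the point is just that the decay term belongs in the comparison.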
Random thought: does the idea of explosive takeoff of intelligence assume that the alignment problem is solvable?
If the alignment problem isn’t solvable, then an AGI, in creating an ASI, would face the same dilemma as humans: the ASI wouldn’t necessarily share its goals, could disempower it, instrumental convergence would apply, and so on through all the usual arguments.
I suppose one counterargument is that the AGI rationally shouldn’t create ASI, for these reasons, but, similar to humans, might do so anyway due to competitive/racing dynamics. Whichever AGI doesn’t create ASI will be left behind, etc.
I think the amount of news that is helpful and healthy to consume depends a lot on what it is that you’re trying to do. So maybe a good place to start is thinking about how sensitive your work is to current developments, and going from there. Channel Duncan Sabien and ask, “What am I doing, and why am I doing it?”
And if you are going to spend a lot of time with the news, read Zvi’s piece on bounded distrust and maybe also the linked piece from Scott Alexander.
Personally, I view participation in the charitable projects in my community (including donating to church or to a colleague’s pledge drive) as part of my consumption basket and totally unrelated to altruistic work. Relationships are incredibly important to one’s life satisfaction and participating in the community is a part of that.
I did not click Disagree; but I will say that I’m not sure I agree that “The people we are aiming to help should be well within the conversation”. I don’t mean to say that we should ignore their perspectives, values, or opinions, but I don’t think having them attend EA Global is a useful way to achieve that. I’ve had a lot of interesting conversations with GiveDirectly and AMF beneficiaries, but I also think that the median beneficiary would not have much to contribute at EA Global, and choosing exceptional beneficiaries to represent the class as a whole leads to a different set of problems.
EA is funding some of that stuff, e.g., The Center for Election Science.
It’s not even clear to me that EA trying to change the election would be positive EV. Look at what’s happened with AI.
I would suggest thinking about it this way: do I need to know what Garry Kasparov’s winning move would be in order to know that he would beat me at chess? The answer is “no”: he would definitely win, even if I can’t predict exactly how.
As I wrote a couple of years ago, are you able to use your imagination to think of ways that a well-resourced and motivated group of humans could cause human extinction? If so, is there a reason to think that an AI wouldn’t be able to execute the same plan?
I would welcome a blog post about RCTs, and if you decide to write one, I hope you consider the perspective below.
As far as I can tell, ~0% of nonprofits are interested in rigorously studying their programs in any way, RCTs or otherwise, and I can’t help but suspect this is largely because, when we do run RCTs, we mostly find that these cherished programs have ~no effect. It’s not at all surprising to me that most charities that conduct RCTs feel pressured to do so by donors; but on the other hand, basically all charity activities ultimately flow from donor preferences, because donors are the ones with most of the power.
Living Goods is one interesting example, where they ran an RCT because a donor demanded it, got an unexpected (positive) result, and basically pivoted the whole charity based on that. I view that as a success story.
I am certainly not claiming that RCTs are appropriate for all kinds of programs, or some kind of silver bullet. It’s more like, if you ask charities “would you like more or less accountability for results”, the answer is almost always going to be, “less, thanks”.
I don’t understand how you think these legal mechanisms would actually serve to bind superintelligent AIs. Or to put it another way, could chimpanzees or dolphins have established a legal mechanism that would have prevented human incursion into their habitat? If not, how is this hypothetical situation different?
Regarding the idea of trade — doesn’t this basically assume that humans will get a return on capital that is at least as good as the AIs’ return on capital? If not, wouldn’t the AIs eventually end up owning all the capital? And wouldn’t we expect superintelligent AIs to be better than humans at managing capital?
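As a toy illustration of the compounding point (a minimal sketch; the starting shares and return rates are entirely made up):

```python
# Toy compounding model: if AIs earn a persistently higher return on capital,
# the human share of total capital shrinks toward zero regardless of how
# large it starts. All numbers are illustrative assumptions, not estimates.

def human_share(years, human_capital=99.0, ai_capital=1.0,
                human_return=0.05, ai_return=0.10):
    h = human_capital * (1 + human_return) ** years
    a = ai_capital * (1 + ai_return) ** years
    return h / (h + a)

for years in (0, 50, 100, 200):
    print(years, round(human_share(years), 3))
```

Humans start with 99% of the capital here, still hold ~91% after 50 years, fall below half by about year 100, and are under 1% by year 200. A persistent return gap dominates any starting advantage.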
I agree that it aged well in terms of the expected effects of certain electoral outcomes, but the way I see it, that is different from claiming that electoral interventions would be cost-effective (even in retrospect). There was so much money and effort put into the election that it’s not at all clear to me EA would have been able to make a difference, even with the full weight of the movement dedicated to it.
Just so I understand you correctly, is your claim that if the EA movement had in 2016 spent resources advocating for sortition or electoral system changes, that we would not now be seeing cuts to USAID?
I’m asking because you started this thread with “These sorts of cuts highlight IMO the incorrect strategy EA has been on” and finished with an article advocating sortition and an article advocating policies like approval voting (which EA already funds).