Thanks Larks. Agreed, both of those ideas are already in the template.
Sanjay
This seems superficially like a great idea, but I think it works better for, say, the Centre for Effective Aid Policy (if it still existed).
It’s easier to decide which things to prioritise if you’ve gone through the things that UK aid actually does and worked out which are better and which are less good.
Your ask will be more effective if you have a good handle on which deprioritisations are a no-go politically (e.g. are you suggesting deprioritising work in Gaza or Ukraine? Would the politics of that work? Do any of your suggestions bump up against anything the politicians have said publicly?)
You’re more likely to be effective if you can access the right channels. Simply emailing your MP is a very indirect way of getting through to the right people.
All of the issues look surmountable to me if you're devoting time to this. I don't think I can do a decent job of this in my spare time, especially since the window is very tight: these decisions will be made quickly, I suspect.
But if you think you can, please do so and share your thinking with the rest of us :-)
I haven’t thought about this in great depth, so I’m very open to the possibility that this topic should be deprioritised. I haven’t understood your rationale, so I hope you don’t mind if I probe further.
> Firstly, a lot of the concerns expressed here I think are extremely unlikely. I do not think there is any serious risk that Trump will send the military after, or otherwise seriously harass, former government employees.
I guess I'd be somewhat interested to know why serious harassment is so unlikely. The sources I cited seemed quite worrying to me on this front.
The Guardian reported the following: “Trump’s escalating threats to pervert the criminal justice system need to be taken seriously,” said the former justice department inspector general Michael Bromwich. “We have never had a presidential candidate state as one of his central goals mobilizing the levers of justice to punish enemies and reward friends. No one has ever been brazen enough to campaign on an agenda of retribution and retaliation.” And NPR reported that “Trump has issued more than 100 threats to investigate, prosecute, imprison or otherwise punish his perceived opponents”.
Having said that, the point I was making relied less on whether Trump would actually seriously harass people, and more on whether they would fear that Trump would do so, and specifically fear this enough that they would avoid taking actions which might act as a check/balance on presidential power. Do you believe that people don't have this fear?
> Some of the other things you fear I don't necessarily see as bad. As a matter of democratic accountability, by which I mean accountability to the people rather than checks and balances or "good" governance, I do think the president has the right to fire executive branch employees, whether or not we like the particular decisions he makes.
I’m not sure I follow. Which are things which I fear, but which you don’t see as necessarily bad? When I first read this, I thought you were referring to my list of things I fear:
Evisceration of aid becoming permanent
Increased risk of conflict, potentially moving beyond the likes of Greenland and escalating to great power conflict
Increased risk of (accidental or deliberate) use of nuclear weapons. (Apparently the administration fired over 300 employees at the National Nuclear Security Administration, then tried to reinstate them, but at time of writing doesn't seem to know how; sources: 1,2,3)
Exacerbation of climate change
An unwillingness to follow international norms may lead to greater willingness to develop biological weapons
If tech billionaire “oligarchs” prefer greater deregulation of AI, this could exacerbate the risk of loss of control of AI/misalignment
The human rights abuses typical of a totalitarian state
I’m assuming you do consider all of these to be bad.
When you spoke about the right to fire executive branch employees, were you referring to my concerns about the erosion of democratic institutions? In that section, I observed that:
Trump wants to fire the director of the Office of Government Ethics (OGE), which monitors conflicts of interest. (Source: MSNBC)
Trump fired 17 inspectors general, whose role is to audit the actions of government.
I'm perfectly willing to believe he has that right, but my question is more about whether it leads to better outcomes. Will government make better decisions without the OGE monitoring conflicts of interest? Will government make better decisions if the inspectors general are loyalists (assuming that's what they are)? I imagine this leads to worse outcomes, but if you are more sanguine I'd be interested to know why.
> I do think it is good that people are filing lawsuits challenging the questionably legal things Trump is doing. I don't think that this intervention is particularly neglected.
I had the intuition that there was probably a lot of work that could be done here, but that the firehose of actions meant it was hard for people to give any one of them enough attention. This gave me the impression that while lawsuits were happening, there's probably lots more that can be done, not least because lawsuits are often expensive, and could peter out or become ineffective for lack of funds. This is pretty impressionistic though, so if you have a more carefully researched opinion, I'd be interested.
It does sound sort of interesting, but I don't think I have a clear picture of the theory of change. How does the dashboard lead to better outcomes? If the theory of change depends on certain key people (media? civil servants? someone else?) making use of the dashboard, would it make sense to check with those people and see if they would find it useful? Should we check if they're willing to be involved in the creation process to provide the feedback which helps ensure it's worth their while to use it?
Shortly after I wrote this, the news reported nationwide protests on topics pretty aligned with what I'm talking about here. This might mean that my assessment of neglectedness should be updated.
I have now reviewed and edited the relevant section.
My feeling when I drafted it was as per Ozzie’s comment—as long as I was transparent, I thought it was OK for readers to judge the quality of the content as they see fit.
Part of my rationale for this being OK was that it was right at the end of a 15-page write-up. Larks wrote that many people will read this post. I hope that’s true, but I didn’t expect that many people would read the very last bits of the appendix. The fact that someone noticed this at all, let alone almost immediately after this post was published, was an update for me.
Hence my decision to review and edit that section at the end of the document, and remove the disclaimer.
You wrote:
> Consider these types of questions that AI systems might help address:
> What strategic missteps is Microsoft making in terms of maximizing market value?
> What metrics could better evaluate the competence of business and political leaders?
> Which public companies would be best off by firing their CEOs?
> <...>
I’m open to the possibility that a future AI may well be able to answer these questions more quickly and more effectively than the typical human who currently handles those questions.
The tricky thing is how to test this.
Given that these are not easily testable things, I think it might be hard for people to gain enough confidence in the AI to actually use it. (I guess that too might be surmountable, but it's not immediately obvious to me how.)
Can you give an indication of how common the problem is? (i.e. how often do papers get lost/deleted?) My intuition says not very often, and when it does happen it's most likely to be the least useful papers, but I could believe my intuition is wrong.
I don't think the reason for bringing the ISS down in a controlled way is the risk that it might hit someone on earth, or "the PR disaster" of us "irrationally worrying more about the ISS hitting our home than we are getting in their car the next day".
Space debris is a potentially material issue.
There are around 23,000 objects larger than 10 cm (4 inches) and about 100 million pieces of debris larger than 1 mm (0.04 inches). Tiny pieces of junk might not seem like a big issue, but that debris is moving at 15,000 mph (24,140 kph), 10 times faster than a bullet. (Source: PBS)
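To put the "10 times faster than a bullet" figure in perspective: kinetic energy scales with the square of speed, so even a fragment weighing a single gram carries far more energy than a bullet. A rough back-of-the-envelope calculation (my own illustrative numbers, not from the PBS piece, using ~6,700 m/s for the debris and a typical 9 mm bullet of ~8 g at ~370 m/s):

```latex
% Illustrative figures only (not from the PBS source)
\[
E_{\text{debris}} \approx \tfrac{1}{2}(0.001\,\mathrm{kg})(6700\,\mathrm{m/s})^2 \approx 2.2 \times 10^{4}\,\mathrm{J}
\]
\[
E_{\text{bullet}} \approx \tfrac{1}{2}(0.008\,\mathrm{kg})(370\,\mathrm{m/s})^2 \approx 5.5 \times 10^{2}\,\mathrm{J}
\]
```

So on these rough numbers, a one-gram fragment carries roughly 40 times the energy of the bullet.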
This matters because debris threatens satellites. Satellites are critical to GPS systems and international communication networks. They are used for things like helping you get a delivery, helping the emergency services get to their destination, or supporting military operations.
Any one bit of space debris probably isn't a big deal if you ignore knock-on effects. However, a phenomenon called Kessler Syndrome could make things much worse. This arises when space debris collides with satellites, producing more space debris, creating a vicious circle.
The geopolitics of space debris gets complicated.
The more space debris there is, the more legitimate it is to have weapons on a satellite (to keep your satellite safe from debris).
However such weapons could be dual-purpose, since attacking an enemy’s satellite could be of great tactical value in a conflict scenario.
I haven’t done a cost-effectiveness analysis to justify whether $1bn is a good use of that money, but I think it’s more valuable than this article seems to suggest.
A donor-pays philanthropy-advice-first model solves several of these problems.
If your model focuses primarily on providing advice to donors, your scope is “anything which is relevant to donating”, which is broad enough that you’re bound to have lots of high-impact research to do, which helps with constraint 1.
Strategising and prioritisation are much easier when you’re knee-deep in supporting donors with their donations—this highlights the pain points in making good giving decisions, which helps with constraint 2.
If donors perceive that the research is worth funding, and have potentially had input into the ideation of the research project, they are likely to be willing to fund it, which helps with constraint 6.
This explains why SoGive adopted this model.
Hi Ozzie, I typically find the quality of your contributions to the EA Forum to be excellent. Relative to my high expectations, I was disappointed by this comment.
> Would such a game “positively influence the long-term trajectory of civilization,” as described by the Long-Term Future Fund? For context, Rob Miles’s videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.
> It sounds like you're arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?

This struck me as strawmanning.
The original post asked whether the game would positively influence the long-term trajectory of civilisation. It didn’t spell it out, but presumably we want that to be a material positive influence, not a trivial rounding error—i.e. we care about how much positive influence.
The extent of that positive influence is lowered when we already have existing clear and popular explanations. Hence I do believe the existence of the videos is relevant context.
Your interpretation “It sounds like you’re arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?” is a much stronger and more attackable claim than my read of the original.
> It seems insane to even compare, but was this expenditure of $100,000 really justified when these funds could have been used to save 20–30 children’s lives or provide cataract surgery to around 4000 people?
> These are totally different modes of impact. I assume you could make this argument for any speculative work.

I'm more sympathetic to this, but I still didn't find your comment to be helpful. Maybe others read the original post differently than I did, but I read the OP as simply expressing the concept "funds have an opportunity cost" (arguably in unnecessarily hyperbolic terms). This meant that your comment wasn't a helpful update for me.
On the other hand, I appreciated this comment, which I thought to be valuable:
> I also like grant evaluation, but I would flag that it's expensive, and often, funders don't seem very interested in spending much money on it.
> Donors contribute to these funds expecting rigorous analysis comparable to GiveWell's standards, even for more speculative areas that rely on hypotheticals, hoping their money is not wasted, so they entrust that responsibility to EA fund managers, whom they assume make better and more informed decisions with their contributions.
I think it’s important that the author had this expectation. Many people initially got excited about EA because of the careful, thoughtful analysis of GiveWell. Those who are not deep in the community might reasonably see the branding “EA Funds” and have exactly the expectations set out in this quote.
I’m working from brief conversations with the relevant experts, rather than having conducted in-depth research on this topic. My understanding is:
the food security angle is most useful for a country which imports a significant amount of its food; where this is true, the whole argument is premised on the idea that domestic food producers will be preserved and strengthened, so it doesn't naturally invite opposition.
the economy / job creation angle is again couched in terms of “increasing the size of the pie”—i.e. adding more jobs to the domestic economy and not taking away from the existing work. Again, this doesn’t seem to naturally invite opposition from incumbent food producers.
I guess in either case it's possible for the food/agriculture lobby to nonetheless recognise that alt proteins could be a threat to them and object. I don't know how common it is for this to actually happen.
When advocating that governments invest more in alt proteins, the following angles are typically used:
climate/environmental
bioeconomy (i.e. if you invest in this, it will create more jobs in your country)
food security
I understand the latter two are generally popular with right-wing governments; either of these two positions can be advanced without referencing climate at all (which may be preferable in some cases, for the reasons Ben outlines).
I can confirm that there is at least one NGO which has this type of risk on its radar. I don't want to say too much until we have gone through the appropriate processes for publishing our notes from speaking with them.
If any donors are interested, feel free to reach out directly and I can tell you more.
An application I was expecting you to mention was longer-term forecasts. E.g. if there was a market about, say, something in 2050, the incentives for forecasters are perhaps less good, because the time until resolution is so long. A "chained" forecast could help: each market captures something like "what will next year's forecast say" (and next year's forecast is about the following year's forecast, and so on until you hit 2050, when it resolves to the ground truth).
This assumes that forecasters are less effective when it comes to markets which don’t resolve for a long time.
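To make the mechanism concrete, here's a minimal sketch of the resolution rule I have in mind (Python; the function name, years, and prices are made up for illustration, and a real platform would obviously need to handle fees, manipulation, and so on):

```python
def resolve_chain(prices: dict[int, float], final_year: int, ground_truth: float) -> dict[int, float]:
    """Resolution values for a chain of annual markets.

    prices[t] is the closing price of the year-t market, which asks
    "what will the year-(t+1) market close at?". The final_year market
    resolves to the real outcome; every earlier market resolves to the
    next year's closing price.
    """
    return {
        year: (ground_truth if year == final_year else prices[year + 1])
        for year in prices
    }

# Illustrative numbers only: with markets for 2026-2028, the 2026 forecaster
# is scored against the 2027 market's closing price within a year, rather
# than waiting decades for the final question to resolve against reality.
print(resolve_chain({2026: 0.30, 2027: 0.35, 2028: 0.40}, final_year=2028, ground_truth=1.0))
# -> {2026: 0.35, 2027: 0.4, 2028: 1.0}
```

The point of the chain is that each forecaster is scored against something observable within about a year (the next market's closing price), while the final link still anchors the whole chain to the ground truth.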
In 2020, we at SoGive were excited about funding nuclear work for similar reasons. We thought that the departure of the MacArthur Foundation might have destructive effects which could potentially be countered with an injection of fresh philanthropy.
We spoke to several relevant experts, several of whom were (unsurprisingly) at philanthropically funded organisations tackling the risks of nuclear weapons. Also unsurprisingly, they tended to agree that donors could have a great opportunity to do good by stepping in to fill gaps left by MacArthur.
There was a minority view that this was not as good an idea as it seemed. The counterargument was that MacArthur had left for (arguably) good reasons: namely, that after throwing a lot of good money after bad, they had not seen strong enough impact for the money invested. I understood these comments to be the perspectives of commentators external to MacArthur (i.e. I don't think anyone was saying that MacArthur themselves believed this, and we didn't try to work out whether they did).
Under this line of thinking, some “creative destruction” might be a positive. On the one hand, we risk losing some valuable institutional momentum, and perhaps some talented people. On the other hand, it allows for fresh ideas and approaches.
Thanks Larks, I definitely agree with your characterisation of Kevin Esvelt as the bio guy. An error crept into our notes but is now corrected.
Could someone please explain how much extra value this adds given that we already have the Cambridge Declaration?
Thanks for the question. Happy to set out how I think about this, but note that I haven’t researched this deeply, and for several parts of this argument, I could imagine myself changing my mind with a bit more research.
Firstly, we're not considering the aid spend in isolation. Rather, the impact of our actions may be to redirect spend from one usage to another, so we're comparing to some counterfactual spend, which is typically likely to lead to (probably) some sort of economic activity in a developed economy.
Secondly, I think it's useful to consider three levels of impact: first order, second order, and third order effects.
Part of the reason why I consider the meat-eater problem to be only a "moderate negative" (as per the "second order" row) is that I'm inclined to believe it's not always bad for animals. If the aid targets the poorest of the poor (which doesn't always happen), these are likely to be the rural poor, who live in areas where land is cheap, so animals have lots of space to peck around and graze, and seem, from what I've seen, to have a nice time (source: hanging around in poor parts of sub-Saharan Africa; not that I'm an expert at judging animal welfare just from looking at an animal, so my judgement may be off). These animal lives appear (to me) to be net positive. On the other hand, I do expect the effect of aid will be to accelerate the rate at which people become middle class. This is more likely to lead to consumption of factory-farmed animals, which is a negative.
The third order effects are much more speculative. To what extent does greater economic development spur moral circle expansion? There’s lots to say on this, and I don’t want to lengthen this comment further.
To my mind, the second order effects are very speculative, and the third order effects even more so. But they are potentially more important in the long term.
Putting together all the second order and third order considerations, I don't think it's clear which outcome leads to better outcomes for animals, so I'm inclined to treat the effects on animals as a neutral factor.
If I spent more time looking into this, I may still change my mind.