Maybe I’m misunderstanding this, but I disagree. I think the average person thinks spending tons of money on global health and poverty is good, particularly because it has concrete, visible outcomes that show whether or not the work is worthwhile (and these quick feedback loops mean the money can usually be spent on projects we have stronger confidence in).
But I think that spending lots of money on people who might have a 0.000001% chance of saving the world (in ways that often seem absurd to the average person) is pretty bad optics. A lot of non-EAs don’t think we can realistically make traction on existential risk because they haven’t seen any evidence of traction. Plus, longtermists/x-risk people can come across as having an unfounded sense of grandiosity—there are a whole bunch of people out there who think their various projects will drastically transform the world, and most people won’t assume that the longtermist approach is the only one that’ll actually work.
Sorry, I think you might have actually misunderstood my point. I was talking about spending money on people working on global poverty vs. people working on longtermism, rather than spending money on global poverty vs longtermism.
My point is that if you invest a lot of money in people working on global poverty, the natural question is why you aren’t spending it on global poverty directly, whereas it’s hard to spend money on longtermism without spending it on people. In any case, people are more accepting of AI researchers being paid large sums.
That makes sense, though I feel like this still applies. It’s still not great optics to pay lots of money to people working on global poverty, but it’s far from unheard of, and if there’s concrete evidence that those people are having an impact, then I think a lot of people would consider it justified.
I think the reason it’s acceptable for AI researchers to bring in large sums of money is more because of the market rate for their skillset and less because of the cause directly. If someone were paid a high salary to build complex software that solved poverty (if such a thing existed), I would guess that would be viewed roughly the same way. On the other hand, if you pay longtermist and/or global poverty community-builders lots of money, that looks much worse.
Maybe I can help Chris explain his point here, because I came to the comments to say something similar.
The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.
Neartermists are right to be worried about spending money on things that don’t clearly improve measures of global health, animal welfare, etc., because they could in theory take that money and funnel it directly into work on those causes, even if it had low marginal returns. They should probably feel bad if they wasted money on a big party, because that big party could have saved some kids from dying.
Longtermists are right not to be too worried about spending money. There are astronomical amounts of value at stake, so even millions or billions of dollars wasted doesn’t matter if the spending ends up saving humanity from extinction. There might be near-term reasons related to the funding pipeline for them to care (i.e., optics), but in the long term it doesn’t matter. Thus, longtermists will want to be freer with money in the hopes of, for example, hitting on something that solves AI alignment.
That both of these approaches try to exist under EA causes tension, since the different ways of valuing outcomes result in different recommended behaviors.
This is probably the best case for splitting EA in two: PR problems for one half stop the other half from executing.