Note the many elements of leftist ideology (cooperatives, “mutual aid”, small-is-beautiful-ism, etc.) that appear in this comment alleging that these groups have no common ideology.
This is an article about moral philosophy, not the internal dynamics of the EA community, and it therefore does not belong on the “community” tab.
Yeah, my understanding is there is debate about whether the loss in EV from having an emergency fund in low-yield, low-risk assets is offset by the benefits of reduced risk. The answer will depend on personal risk tolerance, current net worth, expected career volatility, etc. The main point of my comment was just that a lot of people use default low-yield savings accounts even though there’s no reason to do that at all.
*Barnett
That’s a fair point, but a lot of the scenarios you describe would mean rapid economic growth and equities going up like crazy. The expectation of my net worth in 40 years on my actual views is way, way higher than it would be if I thought AI was totally fake and the world would look basically the same in 2065. That doesn’t mean you shouldn’t save up though (higher yields are actually a reason to save, not a reason to refrain from saving).
Thanks for this, Trevor.
For what it’s worth: a lot of people think “emergency fund” means cash in a normal savings account, but this is not a good approach. Instead, buy bonds or money market funds with your emergency savings, or put them in a specialized high-yield savings account (which, to repeat, is likely NOT the savings account you get by default from your bank).
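As a rough illustration (the 0.5% and 4.5% yields below are assumptions chosen for the example, not quotes for any particular account or fund), here is what the gap compounds to on $10,000 over ten years:

\[
\begin{aligned}
\$10{,}000 \times (1.005)^{10} &\approx \$10{,}511 && \text{(assumed 0.5\%/yr default savings account)} \\
\$10{,}000 \times (1.045)^{10} &\approx \$15{,}530 && \text{(assumed 4.5\%/yr money market fund or high-yield account)}
\end{aligned}
\]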
Or just put the money in equities in a liquid brokerage account.
Can you be a bit more specific about what it means for the EA community to deny Matthew (and Mechanize) implicit support, and which ways of doing this you would find reasonable vs. unreasonable?
In the case at hand, Matthew would have had to at some point represent himself as supporting slowing down or stopping AI progress. For at least the past 2.5 years, he has been arguing against doing that in extreme depth on the public internet. So I don’t really see how you can interpret him starting a company that aims to speed up AI as inconsistent with his publicly stated views, which seems like a necessary condition for him to be a “traitor”. If Matthew had previously claimed to be a pause AI guy, then I think it would be more reasonable for other adherents of that view to call him a “traitor.” I don’t think that’s raising the definitional bar so high that no one will ever meet it—it seems like a very basic standard.
I have no idea how to interpret “sellout” in this context, as I have mostly heard that term used for such situations as rappers making washing machine commercials. Insofar as I am familiar with that word, it seems obviously inapplicable.
From an antirealist perspective, at least on the ‘idealizing subjectivism’ form of antirealism, moral uncertainty can be understood as uncertainty about the result of an idealization process. Under this view, there exists some function that takes your current, naive values as input and produces idealized values as output—and your moral uncertainty is uncertainty about the output.
Thanks for writing this. Do you have any thoughts on how to square giving AI rights with the nature of ML training and the need to perform experiments of various kinds on AIs?
For example, many people have recently compared fine-tuning AIs to have certain goals or engage in certain behaviors to brainwashing. If it were possible to grab human subjects off the street and rewrite their brains with RLHF, that would definitely be a violation of their rights. But what is the alternative—only deploying base models? And are we so sure that pre-training doesn’t violate AI rights? A human version of the “model deletion” experiment would be something out of a horror movie. But I still think we should seriously consider doing that to AIs.
I agree that it seems like there are pretty strong moral and prudential arguments for giving AIs rights, but I don’t have a good answer to the above question.
Does PPE not work, or is the issue that people don’t use it?
What specific confirmatory evidence are you thinking of?
Scarce relative to the current level or just < 10x the current level?
I also disagree with those comments, but can you provide more argument for your principle? If I understand correctly, you are suggesting the principle that X can be lexicographically[1] preferable to Y if and only if Y has zero value. But, conditional on saying X is lexicographically preferable to Y, isn’t it better for the interests of Y to say that Y nevertheless has positive value? I mean, I don’t like it when people say things like “no amount of animal suffering, however enormous, outweighs any amount of human suffering, however tiny.” But I think it is even worse to say that animal suffering doesn’t matter at all, and there is no reason to alleviate it even if it could be alleviated at no cost to human welfare.
Maybe your reasoning is more like this: in practice, everything trades off against everything else. So, in practice, there is just no difference between saying “X is lexicographically preferable to Y but Y has positive value”, and “Y has no value”?
Well, they could have. A lot of things are logically possible. Unless there is some direct evidence that he was motivated by EA principles, I don’t think we should worry too much about that possibility.
(1) I also heard this, (2) I’m pretty sure his name is spelled “Kuhn” not “Khun”.
This is not necessarily an insurmountable obstacle. If someone wants to make a statement anonymously on a podcast, they can write it out and have someone else read it.
Yeah. The words “estimates” and “about” are right there in the quote. There is no pretension of certainty here, unless you think mere use of numbers amounts to pretended certainty.
But what is decision-relevant is the expected value. So by “best estimate” do they mean expected value, or maximum likelihood estimate, or something else? To my ear, “best estimate” sounds like it means the estimate most likely to be right, and not the mean of the probability distribution. For instance, take the (B) option in “Why it can be OK to predictably lose”, where you have a 1% chance of saving 1000 people, a 99% chance of saving no one, and the choice is non-repeatable. I would think the “best estimate” of the effectiveness of option (B) is that you will save 0 lives. But what matters for decision-making is the expected value, which is 10 lives.
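Spelling that out with the numbers above (nothing new here, just the arithmetic):

\[
\mathbb{E}[\text{lives saved}] = 0.01 \times 1000 + 0.99 \times 0 = 10,
\]

whereas the single most likely outcome, at 99% probability, is saving 0 lives.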
Sorry if this is a stupid question, I’m not very familiar with GiveWell.
Richard Chappell on consequentialism, theories of well-being, and reactive vs goal-directed ethics.
Ege Erdil on AI forecasting, economics, and quantitative history.
Chad Jones on AI, risk tolerance, and growth.
Phil Trammell on growth economics (the part of his work more directly focused on philanthropy was covered in his previous appearance).
Steven Pinker (he has written a lot that is relevant to one aspect or another of EA).
Amanda Askell on AI fine-tuning, AI moral status, and AIs expressing moral and philosophical views (she talks some about this in a video Anthropic put out).
Pablo Stafforini on the history of EA and translations of EA content.
Finally, I think it would be good to have an episode with a historian of the world wars, similar to the episode with Christopher Brown. Anthony Beevor or Stephen Kotkin, maybe.
The idea of mutual aid comes from anarcho-communist philosopher Peter Kropotkin.
I also don’t think it is accurate that peasant farming is more productive per hectare than capital-intensive, large-scale farms.