Setting aside the substantive issues about how accurate this post is vs. the other one, I’ll admit I’m very uncertain about how much we should avoid talking about partisan politics in AI forums, and how much doing so politicizes the debate vs. clarifies the stakes in ways that help us act more strategically.
Re: “extremely toxic” — most people who would see this post are left-wing; that much is obvious.
I don’t think that a word-for-word identical post where the author self-identified as an EA would be good. I think it would be less bad, and I might not clamor for the title to be changed.
The problem is that this post blew up on Twitter and a lot of people’s image of EA was downgraded because of it. To me, that’s very unfair; this post is wrong on the substance, this is an extremely unpopular opinion within EA, and the author doesn’t even identify as an EA, so the post does not provide any evidence that people who identify as EA think this way. Changing the title would alleviate most of the reputational damage to EA (or it would have, had it been done earlier) and does not seem too big an ask.
IMO it’s pretty outrageous to title a piece “The EA case for [X]” when you do not yourself identify as an effective altruist and the [X] in question is extremely toxic to almost everyone on the outside. It’s like if I wrote a piece titled “The feminist case for Benito Mussolini” in which I made clear that I am not a feminist but that feminists should be supporting Mussolini.
Could you please make the title “My case for Trump 2024” or even just “The case for Trump 2024”? It would be a more accurate description of this piece, and you are hurting EA’s reputation a bit with the current title.
I think it’s worth noting that the two examples you point to are right-wing, which the vast majority of Silicon Valley is not. Right-wing tech people likely have higher influence in DC, so that’s not to say they’re irrelevant, but I don’t think they are representative of Silicon Valley as a whole.
I do want to make the point that how tied to EA you are isn’t really your choice. The reason it’s really easy for media outlets to tie EA to scientific racism is that there’s a lot of interaction with scientific racists and nobody from the outside really cares if events like this explicitly market themselves as EA events or not. Strong free speech norms enabling scientific racism have always been a source of tension for this community, and you can’t just get around that by not calling yourselves EA.
Ok. Sorry about the tone of the last response, that came off more rude than I would have liked. I do find it unsettling or norm-breaking to withhold information like this, but I guess you have to do what they allow you to do. I remain skeptical.
This number is crazy low. It seems bad to make a Cause Area post on the forum that entirely rests on implausibly low numbers taken from some proprietary data that can’t be shared. You should at least share where you got this data and why we should believe it.
Regulation, probably, mostly
The main questions in my mind are the extent to which public opinion (in the tech sphere and beyond) will swing against OpenAI in the midst of all this, and the extent to which it will matter. There’s potential for real headway here—public opinion can be strong.
Love a good cost-effectiveness calculation.
Has anyone done a calculation of the (wild) animal welfare effects of climate change? Or is this so ungodly intractable that no one has dared attempt it?
Trump is against tackling pandemics, except insofar as doing so would imply he did anything wrong.
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn’t give specifics on his policy positions, this seems like something he is particularly interested in.
I know politics is discouraged on the EA forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He’s up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
Yes, I just would have emphasized it more. I sort of read it as “yeah, this is something you might do if you’re really interested,” whereas I would say “this is something you should really probably do.”
Mostly agreed, but I do think that donating some money, if you are able, is a big part of being in EA. And again this doesn’t mean reorienting your entire career to become a quant and maximize your donation potential.
Allocate Donation Election Funds by Proportional Representation
All punishment is tragic, I guess, in that it would be a better world if we didn’t have to punish anyone. I guess I just don’t think the fact that SBF on some level “believed” in EA (whatever that means, and if that is even true) — despite not acting in accordance with the principles of EA — is a reason that his punishment is more tragic than anyone else’s.
This is just not true if you read about the case: he obviously knew he was improperly taking user funds and told all sorts of incoherent lies to explain it, and it’s really disappointing to see so many EAs continue to believe he was well-intentioned. You can quibble about the length of the sentence, but he broke the law, and he was correctly punished for it.
Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve “global capacity”, and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don’t see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuff, it is practically meaningless. This is why I was so drawn to this post—I think you correctly point out that “improving the lives of current humans” is not really what GHW is about!
The non-controversial stuff doesn’t have to be anti-malaria efforts or anything that GiveWell currently pursues; I agree with you there that we shouldn’t dogmatically accept these current causes. But you should really be defining your GHW worldview such that it always centers on non-controversial stuff. Is this kind of arbitrary? You bet! As you state in this post, there are at least some reasons to stay away from weird causes, so it might not be totally arbitrary. But honestly it doesn’t matter whether it’s arbitrary or not; some donors are just really uncomfortable about pursuing philosophical weirdness, and GHW should be for them.
This type of thing is talked about from time to time. The unfortunate thing is that there aren’t a ton of plausible interventions. The main tool we have to fight authoritarianism in the US is lawsuits, and that’s already being done; it’s not an area where EA could have a comparative advantage. The other big thing that people come up with is helping Democrats win elections, and there are people working on this, although (fortunately) elections are ultimately decided by the voters; campaign tactics have limited effect, at least at the national level. Beyond this, I think the most plausible intervention is probably changing election law at the state level through lobbying/advocacy or petitioning for ballot measures — and even there you’d have to find useful measures that are passable (mandating that election counting be done on election night so that there’s less suspicion of fraud? Giving less leeway to election boards so that they aren’t an easy target for theft? Score voting?).
PS: voting rights are pretty much a non-issue. The partisan effect of restrictive voting laws is quite small, and if anything these laws probably hurt Republicans these days, because they do better among disengaged voters.