I think this is a valid concern, but it's important to note that if Richard were a left-winger, this same concern wouldn't be there.
Marcus Abramovitch
The forum likes to catastrophize about Trump, but I need to point out a few things for the sake of accuracy, since this is very misleading and highly upvoted.
The current administration has done many things that I find horrible, but I don't see any evidence of an authoritarian takeover. Being hyperbolic isn't helpful.
Your Manifold question is horribly biased, because you are the author and wrote it that way. First, there is your bias in how you will resolve the question. Second, the wording of the question comes off as incredibly biased: for example, counting Bush v. Gore as a coup, or "Anything that makes the person they try to put in power illegitimate in my judgment." Your judgment is doing a lot of heavy lifting there.
I think it's important to quantify this supposed incentive. Needless to say, I think it's very low.
I don't think it matters much, but I was Manifold's #1 trader until I stopped, and I'm fairly well regarded as a forecaster.
On this note, I'm happy that CEA's new post talks about building the brand of effective altruism.
I understand why people shy away from or hide their identities when speaking with journalists, but I think this is a mistake, largely for reasons covered in this post. I also think a large part of the deterioration of EA's brand is not just FTX but the risk-averse reaction to FTX by individuals (again, for understandable reasons), which harms the movement in a way where the costs are externalized.
When PG refers to keeping your identity small, he means don't defend it or its characteristics for the sake of it. There's nothing wrong with being a C/C++ programmer while recognizing that it's not the best choice for rapid development or memory safety. In this case, you can own being an EA/your affiliation with EA without needing to justify everything about the community.
We have a bit of a tragedy-of-the-commons problem: a lot of people are risk-averse and don't want to be associated with EA in case something bad happens to them, but this causes the brand to lose a lot of good people you'd be happy to be associated with.
I'm a proud EA.
I think we feel this more than is actually the case. A lot of people know about EA but don't have much of an opinion on it, similar to how I feel about NASCAR or something.
I recently caught up with a friend who worked at OpenAI until very recently, and he thought it was good that I had been part of EA and approved of what I've done since college.
Didn't mean to imply you were, sorry if it came off that way.
Yup, I agree. But I think most people don't care as much about political outcomes as they purport to, based on their actions. I think a lot of that is social desirability bias.
I also don't think it's that clear that Kamala is obviously the better pick, or that Trump being President instead of Kamala costs $1-10T of value. I like this comment about the better choice for President being non-obvious.
I agree that we should judge the actions ex ante. I also agree (I think you are implying this) that you have to start early and do good thinking in order to be effective here. Three months before the election is too late. We had to get on this years before the election, and the most effective solutions would look like getting good, sound, authentic, moderate candidates into the running, or paying Biden $100mm to commit to not running.
I think if you went to, say, Reid Hoffman and Mark Cuban and others and said "it's going to cost $10B, and with that $10B we flip this election," they would probably each put in ~$100mm personally (maybe less, tbh) and go pretty gung-ho about getting you to $10B.
The main problem is that I just don't think you can turn money into votes through advertising past a point. I think you need to actually just pay people (which is illegal), and then you can flip votes. But for the vast majority of people, showing them more ads just does nothing. There's even some evidence that it turns people off.
I'm not going to scrutinize your calculations. I think you realize that you don't know in advance how many votes you need and where; that perhaps $1k per vote flip is the marginal cost, and once you pick that fruit it gets a lot harder; and that you don't have good accuracy on which votes you flip (even in a model where you do get to pay $1k per flip, most of the time that flip happens in some random, unimportant state). So you basically get an advantage where the math looks great because of the importance of certain states under the electoral college, but that advantage gets effectively undone because you don't get to perfectly target the states you want.
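To make the targeting point concrete, here's a rough back-of-envelope in code. All the numbers (cost per flip, share of flips that land somewhere that matters, margin you need to close) are purely illustrative assumptions of mine, not anything from the original post:

```python
# Minimal sketch: even if you could "buy" vote flips at $1k each, imperfect
# geographic targeting dilutes the value, because most flips land in states
# that were never in play. All figures below are assumed, not estimated.

COST_PER_FLIP = 1_000              # assumed marginal cost of one flipped vote
P_FLIP_IN_SWING_STATE = 0.05       # assumed share of flips that land where they matter
FLIPS_NEEDED_IN_SWING = 100_000    # assumed margin you need to close

useful_flips_per_dollar = P_FLIP_IN_SWING_STATE / COST_PER_FLIP
budget_needed = FLIPS_NEEDED_IN_SWING / useful_flips_per_dollar
print(f"Budget needed under these assumptions: ${budget_needed:,.0f}")
# -> $2,000,000,000 — 20x the naive $100M you'd get from 100k flips * $1k each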
I strongly expect that once everything is accounted for, donating EA money to politics won't be cost-effective, but I like that you're thinking about this and realize that it's not going to be "EA money" predominantly.
I'm fairly skeptical that more money would really have done anything. I understand that politics should get more money than almonds. But I think that would mainly be done by giving money to both sides.
As an exercise: what should Kamala have spent more money on if she had it? She had name recognition; I don't think anyone in the country was unsure about who she was. I think it's really hard to come up with things more money would have done for the Democrats. The real thing you need to do is not purchasable with money: you need to make Democratic policies work better for people. I think Ezra Klein is really onto something here.
I've written about this elsewhere, but it is far less constructive when you come at everything with a mindset where you assume malicious intent and then look for corroborating evidence.
Again, native English speakers sometimes make grammar/spelling mistakes. Grammar in your non-native language is harder for a variety of reasons. One thing to at least consider is that words such as "in, by, on, until" often don't translate perfectly, or mean somewhat different things depending on context. I speak English natively/fluently. When I speak in French or Spanish (where I'm proficient-to-fluent), I definitely make mistakes all the time, precisely because I am doing a lot of translating to/from English and not thinking in the language. Here's a simple example I came up with in English-Spanish.
See how "in" and "by" both get translated to "en". I probably would use different phrasing than Google Translate, but it wouldn't shock me if Sinergia people are using Google Translate (or similar) frequently. It's exhausting to speak/work in your non-native language, and there are all these tiny phrasings that are difficult. Now multiply this by every row/column in the Google Sheet and every claim, etc.
It's good that you are reviewing this work, and my offer still stands to pay you for future reviews you want to do in good faith. We need far more rigor on cost-effectiveness analyses, and EA often has a culture where we are too nice to each other to call things out, and where we get defensive about object-level criticism. I think they have gotten better in the last couple of years, but I was fairly unhappy with ACE's cost-effectiveness methods a few years ago, and so I want their work reviewed, checked, and questioned, and perhaps even re-done. But for criticism to be taken well and without defensiveness, you can't come out fully on the offensive and accuse people of lying and malicious intent everywhere.
ACE clearly made a mistake by leaving column W published in the public view. I'm sure they would actually give you everything unredacted if you asked and were nice about it! But you need to get out of the mindset of doing a charity "takedown" as opposed to a charity review. It wouldn't surprise me if many organizations are slightly optimistic in taking credit for things or are a bit generous in their counting. Correcting this is great; it gives us better info/data from which to make decisions. If it does turn out that some charities are way off the mark, I'm sure some will be a bit defensive, but others will actually want to change their work.
Here is an example of @Vasco Grilo doing a pretty good critique of Sinergia, arguing that they should be focusing on their cage-free campaigning as opposed to meal replacement. That is extremely useful. It's particularly useful because it's something that @Carolina Galvani - Sinergia Animal can engage with, it doesn't assume Sinergia is lying, and additional reasons can then be given for why Sinergia might still want to do something, etc.
I'm a bit confused about what you are arguing for. Do you want the wealthy to have even more influence on our politics? Or is this just a supposition that Kamala Harris would have been better than Trump (I agree), and thus it would have been good for billionaires to donate for this reason?
I should have made my response clearer. I am suggesting a few things.
It seems that, on a relative basis, billionaires are on average about as charitable as, or more charitable than, EAs. I think this is a sign that EAs should be far more charitable.
I think the fact that even EAs don't seem to donate very much suggests that it actually is very hard to get people to donate significant percentages of their income/wealth.
I think it's going to be quite hard to convince others (in this case, billionaires) that they should be donating significant sums of their income/wealth when we don't. It's just too easy to dismiss us as hypocrites at first glance, or to conclude that we don't really believe it.
I just don't see that most EAs are taking very low salaries. Many (most?) make salaries comparable to what they would make in industry, some more, some less. I don't think EA salaries are particularly low in general.
I think you are usually an insightful, reasonable, and truthful commenter on the forum and off it. That said, I think there are a few errors here, and some important facts on this topic are omitted.
This is the Gini coefficient (a measure of inequality) in the United States (the country I expect you are talking about) over the last 20 years.
Here it is for several more countries where EAs predominantly live.
I don't know why it "seems" like inequality is getting worse. I think a lot of that has to do with news coverage and such. But Gini coefficients are flat in most of the world over this time and going down (towards less inequality) in a few countries.
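For anyone unfamiliar with the measure, here's a minimal sketch of how a Gini coefficient is computed from a list of incomes. The toy numbers are mine for illustration, not the national statistics in the charts above:

```python
# Gini coefficient: 0 = perfect equality, 1 = one person has everything.
# Uses the standard formula on sorted incomes: G = 2*sum(i*x_i)/(n*sum(x_i)) - (n+1)/n

def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    weighted_sum = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

print(gini([20_000, 30_000, 40_000, 80_000, 200_000]))  # ~0.44 for this toy sample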
With respect to donations, I again want to point out that EAs themselves don't donate very much money. This link is from 2019 and I don't have more recent data, but I expect the trend has gone down significantly since then, as there has been less emphasis on earning to give in the community (I also think there is a good chance that these surveys overestimate donations). I understand that the majority of EAs aren't billionaires, but they often earn a significant amount of money, definitely enough to put them in the global top 5% and often the top 1%. The median EA donates something like 3%. These are people who self-identify as charitable.
On the power of the ultra-wealthy, I expect some of this is coming from Elon Musk's power, but keep in mind that the majority of billionaires supported the candidate who lost the election. I'm not sure by which measure billionaires are more powerful than in previous years (unless, of course, it's simply that there are more of them, since the world is getting richer, plus inflation).
Vetted Causes, I agree with you that Sinergia shouldn't be deleting column W, especially at a time when this is happening. I think they should put it back up and add more explanatory comments if necessary.
That said, I think you are perhaps assuming too much bad faith in this whole ordeal. It seems extremely plausible to me that an ESL person would confuse a small word like this, and I think you are coming at this from a perspective where you assume malicious intent and then find corroborating evidence for it.
This is generally less than one FTE for an AI safety organization. Remember, there are costs other than just salary.
MATS is spending far more than £500k/year. I don't know how accurate it is, but it looks like they might have spent ~$4.65MM. I'm happy to be corrected, but I think my figure is more accurate.
The other two things I want to point out are:
It's very tempting to be biased towards "the thing I should be doing is making money". I've seen a shocking number of earning-to-give people who don't seem to do much giving, particularly in AI safety. There should be a small corrective bias against concluding that the thing you should be doing is making money and investing it to earn more money; that looks a lot like selfish non-impact.
£250k/year, after taxes and expenses, just isn't that much to donate. In the UK (where the £250k/year would be paid), that would incur income tax of ~35-40% depending on deductions; let's call it £95k. After, say, £45k/year in personal expenses (more if you have a family), we are talking about £110k/year. Invested or not, this just isn't enough money to move the needle on AI safety by enough to write home about. AI governance organizations would very happily fund a very good mid-to-senior operations management role at EA and adjacent organisations with a longtermist focus, or another role like it. These orgs spend £110k/year like it's nothing.
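Spelling that arithmetic out (the tax and expense figures are the rough estimates above, not exact UK tax math):

```python
# Rough back-of-envelope using the figures from the paragraph above.

gross = 250_000             # £/year salary
income_tax = 95_000         # roughly 38% effective, "call it £95k"
personal_expenses = 45_000  # assumed annual living costs, more with a family

donatable = gross - income_tax - personal_expenses
print(f"£{donatable:,} per year available to donate")  # £110,000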
I think this effect is completely overshadowed by the fact that, if what you are saying is true, we have 5-10 years to get the technical alignment/governance of AI to go well.
Now is the time to donate and work on AI safety stuff, not to get rich and donate to it later in the hope that things work out.
From an outsider perspective, this looks like the sort of thing that almost anyone could get started on, and I like the phrasing you used to signal that. AI progress moves so fast that you are most likely going to be the only one looking at something, so you can do very basic things like:
"How deterministic are these models? If you take the first K lines of the CoT and regenerate it, do you get the same output?"
It's pretty easy to imagine taking 1 line of CoT and regenerating, then 2 lines, and so on.
I think a lot of people could just do this, and getting to do it under Neel Nanda is likely to lead to a high-quality paper.
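For illustration, here's a rough sketch of what that prefix-regeneration experiment could look like. The generate() and final_answer() helpers are placeholders for whatever model call and answer extraction you'd actually use; none of this is from the original post:

```python
# Hypothetical sketch of the "regenerate from the first k lines of CoT" experiment.

def generate(prompt: str) -> str:
    """Placeholder: return the model's chain of thought plus final answer."""
    raise NotImplementedError("plug in your model call here (OpenAI, local model, ...)")

def final_answer(output: str) -> str:
    """Crude answer extraction: take the last non-empty line of the completion."""
    return [line for line in output.strip().splitlines() if line.strip()][-1]

def determinism_sweep(question: str) -> list[bool]:
    """For each prefix length k, keep the first k CoT lines, let the model continue,
    and record whether the final answer matches the original run."""
    original = generate(question)
    lines = original.splitlines()
    results = []
    for k in range(1, len(lines) + 1):
        prefix = "\n".join(lines[:k])
        rerun = generate(question + "\n" + prefix)
        results.append(final_answer(rerun) == final_answer(original))
    return results

# Usage (once generate() is wired up to a real model):
# matches = determinism_sweep("A farmer has 17 sheep...")
# print(matches)  # e.g. [False, False, True, True, ...] as the prefix pins the answer down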
In common English parlance, we don't preface everything with "I have estimates that state...".
I don't think any reasonable person thinks that they mean that if they got an extra $1, they'd somehow pay someone for 10 minutes of time to lobby some tiny backyard farm of about 1,770 pigs to take on certain practices. You get to these unit economics with a lot more nuance.
It's EAG weekend. I would give it at least a week before rushing to a judgement.
I think 1, 3, and 4 are all possible.
Trump and crew spout millions of lies. It's very common at this point. If you get worked up about every one of these, you're going to lose your mind.
Look, I'm not happy about this Trump stuff either. It's incredibly destabilizing for many reasons. But you are going to lose focus on important things if you get swept up in the daily Trump news. If you are focused on AI safety or animal welfare or poverty or whatever it may be, your most effective move will almost certainly be to focus on something else.