I’m a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
titotal
Most smart and skilled people are outside of the EA/rationalist community: an analysis
The public already had a negative attitude towards the tech sector before the AI buzz: in 2021, 45% of Americans had a somewhat or very negative view of tech companies.
I doubt the prevalence of AI is making people more positive towards the sector, given all the negative publicity over plagiarism, job loss, and so on. So I would guess the public already dislikes AI companies (even if they use their products), and this dislike will probably increase.
I want to make my predictions about the short-term future of AI, partially sparked by this entertaining video about the nonsensical AI claims made by the Zoom CEO. I am not an expert on any of the following, of course; I’m mostly writing for fun and for future vindication.
The AI space seems to be drowning in unjustified hype, with very few LLM projects having a path to consistent profitability, and applications that are severely limited by the problem of hallucinations and the general fact that LLMs are poor at general reasoning (compared to humans). It seems like LLM progress is slowing down as they run out of public data and resource demands become too high. I predict GPT-5, if it is released, will be impressive to people in the AI space, but it will still hallucinate, will still be limited in generalisation ability, will not be AGI, and the average Joe will not much notice the difference. Generative AI will be big business and play a role in society and people’s lives, but in the next decade it will be much less transformative than the introduction of the internet or social media.
I expect that sometime in the next decade it will be widely agreed that AI progress has stalled, that most of the current wave of AI bandwagon-jumpers will be quietly ignored or shelved, and that the current wave of LLM hype might look like a financial bubble that burst (a la the dotcom bubble, but not as big).
Both AI doomers and accelerationists will come out looking silly, but both will argue that we are only an algorithmic improvement away from godlike AGI. Both movements will still be obscure Silicon Valley things that the average Joe only vaguely knows about.
In defense of standards: A fecal thought experiment
I think posts like this exhibit the same thought-terminating cancel-culture behaviour that you are supposedly complaining about, in a way that is often inaccurate or uncharitable.
For example, take the mention of Scott Alexander:
It reports, for example, that Scott Alexander attended the conference, and links to the dishonest New York Times smear piece criticizing Scott, as well as a similar hitpiece calling Robin Hanson creepy.
Now, compare this to the actual text of the article:
Prediction markets are a long-held enthusiasm in the EA and rationalism subcultures, and billed guests included personalities like Scott Siskind, AKA Scott Alexander, founder of Slate Star Codex; misogynistic George Mason University economist Robin Hanson; and Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (Miri).
Billed speakers from the broader tech world included the Substack co-founder Chris Best and Ben Mann, co-founder of AI startup Anthropic.
Now, I get the complaint about the treatment of Robin Hanson here, and I feel that “accused of misogyny” would be more appropriate (outside of an op-ed). But with regards to Scott Alexander, there was literally no judgement call included.
When it comes to the NYT article, very few people outside this sphere know who he is. Linking to an article about him in one of the most well known newspapers in the world does not seem like a major crime! People linking to articles you don’t like is not cancel culture. Or if it is, then I guess I’m pro cancel culture, because the word has lost all meaning.
It feels like you want to retreat into a tiny, insular bubble where people can freely be horribly unpleasant to each other without receiving any criticism at all from the outside world. And I’m happy for those bubbles to exist, but I have no obligation to host your bubble or hide out there with you.
Imagine I go to a conference, and a guy poops himself deliberately on stage as performance art. It smells a lot and is very unpleasant and I have a sensitive nose.
I announce, publicly, that “I don’t like it when people deliberately poop themselves on stage. If other places have deliberate pants-pooping, I won’t go to them”.
I am 1) publicly stopping going, 2) because of who they associate with (pants-poopers), and 3) implying I’ll do that to other people who associate with the same group (pants-poopers).
Ergo, according to your logic, I am boycotting, encouraging others to boycott, and “trying to control who people can hang out with”, even if, y’know, I just want to go to conferences where I don’t smell poop.
I have free association, as does everyone else. I don’t like pants shitters, and I don’t like scientific racists (who are on about the same level of odiousness), and I’m free to not host them or hang around them if I want to.
I recognise that a lot of criticism is bad, and I have written a long post on why I think that is. But this is going too far in the other direction.
Spend enough time listening to the criticisms of effective altruism and it becomes clear that, aside from those arguing for small tweaks at the margins, they all stem from either a) people being very dogmatic and having a worldview that’s strangely incompatible with doing good things (if, for instance, they don’t help the communist revolution); b) people wanting an excuse to do nothing in the face of extreme suffering; or c) people disliking effective altruists and so coming up with some half-hearted excuse for why EA is really something-something colonialism.
All of them? You think literally every person who is not on board with the effective altruism movement is doing so for these three reasons?
EA, as a movement, is minuscule and highly homogeneous. Like any group, it will be wrong about a lot of things. I think sentiments like this, dismissing every person who is not on board with the EA movement as some kind of crazy SJW, are epistemological suicide.
Look, I’m a fan of malaria nets and animal-welfare EA. I have donated plenty to malaria nets myself. But that is not the entire movement. You can’t just isolate one part of it and ignore the whole “billion-dollar fraud” thing, the abuses of power, the mini-cults, the sexism/racism controversies. Or its part in building up OpenAI and starting the AI arms race, with all the harms they have brought.
EA is seeking power and influence, and wants to have a large effect on the future of humanity. People are allowed to be concerned about that.
Trying to cancel folks because they spoke at an event but another speaker said a bad thing 15 years ago—that’s an absurd level of guilt by association.
This is a very uncharitable, bordering on dishonest, interpretation of the critics of this event.
Like, even if you’re talking about the Guardian article, which definitely has an anti-EA stance, I would describe their main “cancellation” (not a fan of how this word is used) targets as Lightcone and Manifest. The charge is that Lightcone hosted a conference filled with racist speakers at the Lighthaven campus, and that Manifest invited said speakers to the conference.
I don’t see them cancelling, say, Nate Silver, who fits your description of “spoke at the event but another speaker said a bad thing 15 years ago”.
Also, “said a bad thing 15 years ago” is an absurd twisting of the accusations. Hanania said some really, really racist things under a pseudonym up to 2012 (12 years ago, not 15) that he apologises for, but even the OP admits that he still says “distasteful” things today on Twitter, and I personally think he’s still pretty racist. And most of the other controversial speakers have never apologised for anything, and plenty of the things they said were recent, like the comments of Brian Chau.
You say you had 57 speakers (or I guess more that weren’t featured?). An attendee estimates that 8 speakers at LessOnline and Manifest had scientific racism controversies (with 2 more debatably adjacent). Obviously this isn’t an exact figure, but it looks like something on the order of 5–10% of the speakers had scientific racism ties.
What percentage of speakers were African American (or African anything else)? I did not see any among the 30 with pictures on the site, so I’d guess something on the order of 0–3%.
Do you see a problem with a conference that has something like two or three times as many scientific racist speakers as it does Black speakers?
These speakers are not a representative slice of society. Scientific racists are much, much rarer, and Black people are much, much more common. If your goal is a free exchange of ideas, the ideas you are receiving here are vastly skewed in one direction.
The actual effect of this type of speaker list is to push out anti-racists, and encourage more people sympathetic to scientific racism to join your community. I think this is bad!
and the highly controversial rationalist Michael Vassar
Was Vassar a speaker or just an attendee?
In addition to the cult stuff you mentioned: when the Time article on sexual harassment in rationalist communities came out, many responses to the article claimed Vassar had been accused of multiple instances of sexual harassment or assault and banned from multiple communities. I got the impression he was no longer around, and I’m disturbed that he would be allowed into such a conference.
Edit: see the edit in the OP; Vassar did not actually attend, but apparently he could have if he wanted to. I would advise everyone not to let this guy attend your conferences.
You’ve caught me stuck in bed, and I’m probably the most EA-critical person that regularly posts here, so I’ll take a stab at responding point by point to your list:
It’s good and virtuous to be beneficent and want to help others, for example by taking the Giving What We Can 10% pledge.
Agree.
It’s good and virtuous to want to help others effectively: to help more rather than less with one’s efforts.
2. Agree.
We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against global catastrophic risks such as future pandemics).
3. Agree on global poverty and animal welfare, but I think it might be difficult to do “a lot of good” in some catastrophic risk areas.
In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.
4. Agreed, although I should note that efforts to better target efforts can have diminishing returns, especially when a problem is speculative and not well understood.
In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what’s decision-relevant.)
5. Agreed for global poverty and animal welfare, but I’m mixed on this for speculative causes like AI risk, where there’s a decent chance that efforts could backfire and make things worse, and there’s no real way to tell until after the fact.
Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.
6. Agreed. Unfortunately, EA often fails to live up to this idea.
So it’s good and virtuous to use quantitative tools and evidence wisely.
7. Agreed, but see above.
GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.
8. Agreed, I like GiveWell in general.
So it’s good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.
9. Agreed, with regards to the areas GiveWell specialises in.
There’s no good reason to think that GiveWell’s top charities are net harmful.[1]
10. I think the chance that GiveWell’s top charities are net good is very high, but not 100%. See mosquito-net fishing for a possible pitfall.
But even if you’re the world’s most extreme aid skeptic, it’s clearly good and virtuous to voluntary redistribute your own wealth to some of the world’s poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)
11. Agreed.
Many are repelled by how “hands-off” effective philanthropy is compared to (e.g.) local volunteering. But it’s good and virtuous to care more about saving and improving lives than about being hands on. To prioritize the latter over the former would be morally self-indulgent.
12. Agreed, but sometimes being hands-on can help with improving lives. For example, being hands-on can allow one to more easily receive feedback, understand overlooked problems with an intervention, and ensure it goes to the right place. I don’t think voluntourism is good at this, but I would like to see support for more grassroots projects by people actually from impoverished communities.
Hits-based giving is a good idea. A portfolio of long shots can collectively be likely to do more good than putting all your resources into lower-expected-value “sure things”. In such cases, this is worth doing.
13. I agree in principle, but disagree in practice, given that the “hits-based giving” of EA can be pretty bad. The effectiveness of hits-based giving depends very much on how much each miss costs and the likely effectiveness of a hit. I don’t think the $100,000 grant for a failed video game was a good idea, nor the $28,000 to print out Harry Potter fanfiction that was free online anyway.
Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)
14. This is so broad as to be trivially true, but in practice I often disagree with the judgements here.
The above point encompasses much relating to politics and “systemic change”, in addition to longtermist long-shots. It’s very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy—just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.
15. Generally agree.
Anti-capitalist critics of effective altruism are absurdly overconfident about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.
16. Anti-capitalist is a pretty broad tent. I agree that some people who adopt that label are dumb and naive, but others have pretty good ideas. I think it would be really dumb if capitalism were still the dominant system 1,000 years from now, and there are political interventions that can be predicted to reliably help people. I think “overthrow the government for communism” gets the side-eye; “universal healthcare” does not.
In general, I don’t think that doing good through one’s advocacy should be treated as a substitute for “putting one’s money where one’s mouth is”. It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I’m completely open to judging political donations (when epistemically justified) as constituting “effective philanthropy”—I don’t think we should put narrow constraints on the latter concept, or limit it to traditional charities.
Some people are poor and cannot contribute much without kneecapping themselves. I don’t think those people are useless, and I think for a lot of people political action is a rational choice for how to effectively help. Similarly, some people are very good at political action, but not so good at making large amounts of money, and they should do the former, not the latter.
Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact.
I agree it provides useful tools. But if you take tools like expected value too seriously, you end up doing insane things (see SBF). In general, EA is way too willing to swallow the math even when it gives bad results.
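As a toy illustration of the point (my own hypothetical example, not from the post above): a double-or-nothing bet with a 51% win chance has positive expected value on every round, so a pure EV-maximizer keeps taking it, yet repeated all-in play leads to ruin almost surely.

```python
# Hypothetical bet: doubles your wealth with probability 0.51,
# wipes it out with probability 0.49.

def ev_multiplier(p_win: float) -> float:
    # Expected wealth multiplier for a single double-or-nothing bet.
    return p_win * 2 + (1 - p_win) * 0

def survival_probability(p_win: float, n_bets: int) -> float:
    # Probability of never going bust across n consecutive all-in bets.
    return p_win ** n_bets

print(ev_multiplier(0.51))             # 1.02 > 1: naive EV says always bet
print(survival_probability(0.51, 50))  # ~2e-15: ruin is near-certain
```

The point is not that expected value is useless; it is that maximizing it blindly, with no attention to risk of ruin, endorses behaviour almost everyone would call insane.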
It would be very bad for humanity to go extinct. We should take reasonable precautions to try to reduce the risk of this.
Agreed, depending on what you mean by “reasonable”.
Ethical cosmopolitanism is correct: It’s better and more virtuous for one’s sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more
Agreed, with the caveat that we are talking about beings that currently exist or have a high probability of existing in the future.
Insofar as one’s natural sympathy falls short, it’s better and more virtuous to at least be “continent” (as Aristotle would say) and allow one’s reason to set one on the path that the fully virtuous agent would follow from apt feelings.
The term “fully virtuous agent” raises my eyebrows. I don’t think that’s a thing that can actually exist.
Since we can do so much good via effective donations, we have—in principle—excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.
Agreed, with emphasis on the “permissible means”.
Many individuals have in fact successfully pursued this path. (So Rousseauian predictions of inevitable corruption seem misguided.)
It looks like it, although of course this could be negated if they got their fortunes from more harmful than average means. I don’t see evidence that this is the case for these examples.
Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)
Agreed.
When the stakes are high, there are no “safe” options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell’s top charities, would make you causally responsible for approximately ten deaths every year. That’s really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)
Agreed, although I’ll note that from my perspective, persuading an EAer to donate to AI x-risk instead of GiveWell would have a similar effect, and should be subjected to the same level of scrutiny.
Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don’t, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.
Agreed for some critiques of GiveWell/AMF in particular, like the recent Time article. However, I don’t think this applies to critiques of AI x-risk, because I don’t think AI x-risk charities are effective. If criticism turns people away and they donate to Oxfam or something instead, that is a net good.
Deliberately or negligently making the world worse is vicious, bad, and wrong.
Agreed.
Most (all?) of us are not as effectively beneficent as would be morally ideal.
Agreed
Our moral motivations are very shaped by social norms and expectations—by community and culture.
Agreed
This means it is good and virtuous to be public about one’s efforts to do good effectively.
Generally agreed.
If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.
Agreed
In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.
Agreed, but “in principle” is doing a lot of work here. I think the initial Bolshevik party broadly fit this description, for an example of how this could go wrong.
For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.
Depends on which community we are talking about. See again: the Bolsheviks.
That’s what the “Effective Altruism” community constitutively aims to do.
Agreed.
It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).
Agreed on all statements.
Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.
Agreed.
Such reflection has indeed happened. (I don’t know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It’s not entirely new, of course: SBF’s fraud flagrantly violated extant EA norms;[2] everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more “professional” in various ways.)
There have definitely been some reflections and changes, many of which I approve of. But it has not been smooth sailing, and I think the response to other scandals leaves a lot to be desired. It remains to be seen whether ongoing efforts are enough.
No community is foolproof against bad actors. It would not be fair or reasonable to tar others with “guilt by association”, merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.
I agree that individuals should not be tarred by SBF, but I don’t think this same protection applies to the movement as a whole. We care about outcomes. If a fringe minority does bad things, those things still occur. SBF conducted one of the largest frauds in history: you don’t see Oxfam having this kind of effect. It’s n=1 for billion-dollar frauds, but the n is a lot higher if we consider abuse of power, sexual harassment, and other smaller harms.
The more power and influence EA amasses, the more appropriate it is to be concerned about bad things within the community.
The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.
I think EA has totally flubbed it on AI x-risk. Therefore, if I have the choice between recommending EA in general or just GiveWell’s top charities, doing the latter will be better.
The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.
Agreed, but certain types of dickish behaviour are a flaw of the community that has a detrimental effect on its health, and make its decision-making and effectiveness worse.
If you find the EA community annoying, it’s fine to say so (and reject the “EA” label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don’t want to be associated with them.
Agreed. I generally steer people to GiveWell or its charities, rather than to EA itself.
None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)
I think some of the claims are less valuable outside of utilitarianism, but whatever.
With that all answered, let me add my own take on why I don’t recommend EA to people anymore:
I think that the non-speculative side of EA (global poverty and animal welfare) is nice and good, and is on net making the world a better place. I think the speculative side of EA, and in particular AI risk, contains some reasonable people but also enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into the net-negative territory.
Most of this bad thinking originates from the Rationalist community, which is generally a punchline in wider intellectual circles. I think the Rationalist community is on the whole epistemically atrocious, overconfident about things for baffling reasons, and prone to hero worship, spreading a lot of factually dubious ideas with very poor justification. I find some of the heroes they adore to be unpleasant people who spread harmful norms, ideas, and behaviour.
Putting it all together, I think that overall EA is a net positive, but that recommending EA is not the most positive thing you can do. Attacking the bad parts of EA while acknowledging that malaria nets are still good seems like a completely rational and good thing to do, either to put pressure on EA to improve, or to provide impetus for the good parts of EA to split off.
Thanks for those links, this is an interesting topic I may look into more in the future.
Another thing is that, if you look at what a single consumer GPU can do when it runs an LLM or diffusion model… well it’s not doing human-level AGI, but it’s sure doing something, and I think it’s a sound intuition (albeit hard to formalize) to say “well it kinda seems implausible that the brain is doing something that’s >1000× harder to calculate than that”.
It doesn’t seem that implausible to me. In general, I find that the computational power required for different tasks (such as what I do in computational physics) frequently varies by many orders of magnitude. LLMs get to their level of performance by sifting through all the data on the internet, something we can’t do, and yet they still perform worse than a regular human on many tasks, so clearly there’s a lot of extra something going on here. It actually seems kind of likely to me that what the brain is doing is more than 3 orders of magnitude harder.
I don’t know enough to be confident in any of this, but if AGI turns out to be impossible on silicon chips with Earth’s resources, I would be surprised but not totally shocked.
As for the latter, I think (or at least, I hope!) that there’s wide consensus that whatever human brains do (individually and collectively), it is possible in principle for algorithms-running-on-chips to do those same things too. Brains are not magic, right?
I think this is probably true, but I wouldn’t be 100% certain about it. Brains may not be magic, but they are also very different physical entities from silicon chips, so there is no guarantee that the function of one could be efficiently emulated by the other. There could be some crucial aspect of the mind relying on a physical process that would be computationally infeasible to simulate using binary silicon transistors.
If there are any neuroscientists who have investigated this I would be interested!
I think the factor missing here is the matter of when pushing for a pause is appropriate.
Like, imagine a (imo likely) scenario where a massive campaign gets off the ground, with a lot of publicity behind it, to try to prevent GPT-5 from being released on existential-risk grounds. It fails, GPT-5 is released anyway, and literally nothing majorly bad happens. And then the same thing happens for GPT-6 and GPT-7.
In this scenario, the idea of pausing AI could easily become a laughing stock. Then, when an actually dangerous AI comes out, the idea of pausing is still discredited, and you’re missing a tool when you really need it.
Even if I believed the risk of overall doom was 5% (way too high, imo), I wouldn’t support the pause movement now; I’d wait to advocate a pause until there was a significant chance of imminent danger.
I have no problem with AI/machine learning being used in areas where the black box nature does not matter very much, and the consequences of hallucinations or bias are small.
My problem is with the idea of “superhuman governance”, where unaccountable black-box machines make decisions that affect people’s lives significantly, for reasons that cannot be dissected and explained.
Far from preventing corruption, I think this is a gift-wrapped opportunity for the corrupt to hide their corruption behind the veneer of a “fair algorithm”. I don’t think it would be particularly hard to train a neural network to appear neutral while actually subtly favoring one outcome or another, by manipulating the training data or the reward function. There would be no way to spot this manipulation in the code, because the actual “code” of a neural network involves multiplying ginormous matrices of inscrutable numbers.
Of course, the more likely outcome is that this just happens by accident, and whatever biases and quirks occurred by accident due to inherently non-random data sampling get baked into the decisions affecting everybody.
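To make the mechanism concrete, here is a deliberately tiny sketch (my own construction; the groups, scores, and five-point markdown are all hypothetical) of how skew in training data passes straight through to a model that looks neutral from the outside:

```python
# Hypothetical example: two groups with identical underlying scores,
# but group B's historical scores were marked down by 5 points in the
# training data (the deliberate manipulation, or accidental bias).

def train_group_means(data):
    # A stand-in for "training": learn the mean score per group,
    # which the system then uses to rank future applicants.
    totals = {}
    for group, score in data:
        totals.setdefault(group, []).append(score)
    return {g: sum(scores) / len(scores) for g, scores in totals.items()}

true_scores = [("A", 70), ("A", 80), ("B", 70), ("B", 80)]
biased_training = [(g, s - 5 if g == "B" else s) for g, s in true_scores]

model = train_group_means(biased_training)
print(model)  # {'A': 75.0, 'B': 70.0}: B rated lower despite equal scores
```

A real neural network buries the same effect inside millions of weights; the mean-per-group “model” here is just the smallest thing that shows how a skewed input produces a skewed but plausible-looking output.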
Human decision making is spread over many, many people, so the impact of any one person being flawed is minimized. Taking humans out of the equation reduces the number of points of failure significantly.
This seems like a generally bad idea. I feel like the entire field of algorithmic bias is dedicated to explaining why this is generally a bad idea.
Neural networks, at least at the moment, are for the most part functionally black boxes. You feed in your data (say, the economic data of each state), and the network does a bunch of calculations and spits out a recommendation (say, the funding you should allocate). But you can’t look inside and figure out the actual reasons why, say, Florida got $X in funding and Alaska got $Y. It’s just “what the algorithm spat out”. This is all based on your training data, which can be biased or incorrect.
Essentially, by relegating things to inscrutable AI systems, you remove all accountability from your decision-making. If a person is racially biased and making decisions on that basis, you can interrogate them, analyse their justifications, and remove the rot. If an algorithm is racist, due to being trained on biased data, you can’t tell (you can observe unequal outcomes, but how do you know they were a result of bias, and not of the other 21 factors that went into the model?).
And of course, we know that current day AI suffers from hallucinations, glitches, bugs, and so on. How do you know that a decision was made genuinely, or was just a glitch somewhere in the giant neural network matrix?
Rather than making things fairer and less corrupt, it seems like this just concentrates power in whoever is building the AI. Which also makes it an easier target for attacks by malevolent entities, of course.
I think you’re trying way too hard to rescue a term that just kinda sucks and should probably be abandoned. There is no way to reliably tell in advance whether a try is “the first critical try”: we can only tell that a try was not critical, if an AI rebels and is defeated. Also, how does this deal with probabilities? Does it kick in when the probability of winning is over 50%? 90%? 99%?
The AI also doesn’t reliably know whether a try is critical. It could mistakenly think it can take over the world when it can’t, or it could be overcautious, thinking it can’t take over the world when it can. In the latter case, you could succeed completely on your “first critical try” while still having a malevolent AI that will kill you a few tries later.
The main effect seems to be an emotive one, by evoking the idea that “we have to get it right on the first try”. But the first “critical try” could be version number billion trillion, which is a lot less scary.
I do like your “decisive strategic advantage” term; I think it could replace “first critical try” entirely with no losses.
While I don’t agree with a lot of Torres’s beliefs and attitudes, I also don’t agree with this article’s claim that concerns about EA extremism are unwarranted. Take its stance on SBF, for example:
It’s true that Sam Bankman-Fried, an effective altruist Jane Street employee, went on to commit an enormous fraud — but the fraud was universally condemned by members of the effective altruist community. People who do evil things exist in every sufficiently large social movement; it doesn’t mean that every movement recommends evil.
Yes, SBF does not represent the majority of EAs, but he still committed one of the largest frauds in history, and it’s unlikely he would have done so in a counterfactual world where EA never existed. Harmful, extremist EA-motivated actions clearly have happened, and they were not confined to a few randos on message boards: they involved highly influential and respected EA figures.
Extremism might be in the minority, but it’s still a real concern if there’s a way to translate that extremism into real world harm, as happened with SBF.
I think this is especially important with AI stuff. Now, I don’t believe in the singularity, but many EAs do, and some of them are setting out to build what they believe will be a god-like AI. That would be a lot of power concentrated in the hands of whoever builds it. If they are extremist, flawed, or have bad values, those flaws could be locked in for the rest of time. Even if (more likely) the AI is merely very powerful rather than god-like, a few people could still have a significant effect on the future. I think this more than justifies increased scrutiny of the flaws in EA values and thinking.
I mostly agree with the article, but I think truth-seeking should take into account the movement’s own considerable fallibility. For example:
On the negative side: I can make an argument for any given inclusion or exclusion on the 80,000 hours job board, but I’m certain the overall gestalt is too normal. When I look at the list, almost every entry is the kind of things that any liberal cultivator parent would be happy to be asked about at a dinner party. Almost all of the remaining (and most of the liberal-cultivator-approved) jobs are very core EA. I don’t know what jobs in particular are missing but I do not believe high impact jobs have this much overlap with liberal cultivator parent values.
I don’t see the problem with this. Ideas like “we should stop poor people dying of preventable illnesses” are robust ideas that have stood the test of time and scrutiny, and the reason most people are on board with them is because they are correct and have significant evidence backing them up.
Conversely, “weirder” ideas have significantly less evidence backing them up, and are often based on shaky assumptions or controversial moral opinions. The most likely explanation for a weird new idea not being popular is that it’s wrong.
If you score “truth-seeking” by being correct about the most things on average, then the strategy of “agree with the majority of subject-level scientific experts in every single field” is extremely hard to beat. I guess the hope is that by encouraging contrarianism you can occasionally find a hidden gem whose payoff makes up for everything else, but there is a real cost to that.
Sorry to derail, but I’m a physicist in a related field who’s been reading up on this, and I’m not sure I agree with this characterization.
The issue with quantum physics is that it’s not that hard to “grok” the recipe for actually making quantum predictions within the realms we can reasonably test. It’s a simple two-step formula of evolving the wavefunction and then “collapsing” it, and you could probably code it up in an afternoon for a simple 1D system. All the practical difficulty comes from mathematically working with more complex systems and solving the equations efficiently.
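As a toy illustration of that two-step recipe (my own sketch, using the simplest possible system, a two-level state, rather than a full 1D one): first unitarily evolve the wavefunction, then apply the Born rule to “collapse” it into measurement probabilities.

```python
import cmath
import math

omega = 1.0        # coupling strength (units with hbar = 1)
t = math.pi / 4    # evolution time

# Step 1: evolve psi(0) = (1, 0) under H = omega * sigma_x.
# The exact solution is psi(t) = (cos(omega t), -i sin(omega t)).
psi = (cmath.cos(omega * t), -1j * cmath.sin(omega * t))

# Step 2: "collapse" - the Born rule says measurement probabilities
# are the squared magnitudes of the amplitudes.
probs = [abs(amplitude) ** 2 for amplitude in psi]
print(probs)  # approximately [0.5, 0.5]: an equal superposition at t = pi/4
```

The philosophical trouble all lives in step 2: the recipe tells you what probabilities to compute, but is silent on what the “collapse” physically is.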
The interpretations controversy comes from asking why the recipe works, a question almost all quantum physicists avoid because there is as yet no way to distinguish different interpretations experimentally (and also the whole thing is incompatible with general relativity anyway). Basically every interpretation requires biting some philosophical bullet that other people think is completely insane.
I very much doubt that Carroll is “deeply satisfied” with MWI, although he does think it’s probably true. MWI creates a ton of philosophical problems about identical clones, identity, and probability; Carroll has made attempts to address these, but IMO his solutions are rather weak.
I haven’t read up much on the consciousness debate, but it seems like it could end up in a similar place: everybody agreeing on the experimentally observable results, but unable to agree on what they mean.