Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I'm trying to figure out where effective altruism can fit into my life these days and what it means to me.
Yarrow
AGI by 2028 is more likely than not
I gave a number of reasons I think AGI by 2030 is extremely unlikely in a post here.
Here's the link to the original post: https://epochai.substack.com/p/the-case-for-multi-decade-ai-timelines
One important point in the post, illustrated with the example of the dot-com boom and bust, is that it's madness to just look at a trend and extrapolate it indefinitely. You need an explanatory theory of why the trend is happening and why it might continue or why it might stop. In the absence of an explanatory understanding of what is happening, you are just making a wild, blind guess about the future.
(David Deutsch makes this point in his awesome book The Beginning of Infinity and in one of his TED Talks.)
A pointed question that Ege Erdil does not ask in the post, but should: is there any hard evidence of AI systems invented within the last 5 or even 10 years automating any labour or measurably augmenting the productivity of human workers?
I have looked and I have found very little evidence of this.
One study I found had mixed results. It looked at the use of LLMs to aid people working in customer support, which seems to me like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the LLMs increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:
These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality (RR [resolution rate] and customer satisfaction) suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.
If the amount of labour automation or productivity improvement from LLMs is zero or negative, then naively extrapolating this trend forward would mean full labour automation by AI is an infinite amount of time away. But of course I've just argued why these kinds of extrapolations are a mistake.
It continually strikes me as odd that people write 3,000-word, 5,000-word, and 10,000-word essays on AGI and don't ask fundamental questions like this. You'd think if the trend you are discussing is labour automation by AI, you'd want to see if AI is automating any labour in a way we can rigorously measure. Why are people ignoring that obvious question?
Nvidia revenue is a really bad proxy for AI-based labour automation or for the productivity impact of AI. It's a bad proxy for the same reason capital investment into AI would be a bad proxy. It measures resources going into AI (inputs), not resources generated by AI (outputs).
LLMs seem to be bringing down the costs of software.
Are you aware of hard data that supports this or is this just a guess/general impression?
I've seen very little hard data on the use of LLMs to automate labour or enhance worker productivity. I have tried to find it.
One of the few pieces of high-quality evidence I've found on this topic is this study: https://academic.oup.com/qje/article/140/2/889/7990658 It looked at the use of LLMs to aid people working in customer support.
The results are mixed, suggesting that in some cases LLMs may decrease productivity:
These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality (RR and customer satisfaction) suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.
Anecdotally, what I've heard from people who do coding for a job is that AI does somewhat improve their productivity, but only about the same as or less than other tools that make writing code easier. They've said that the LLM filling in the code saves them the time they would have otherwise spent going to Stack Overflow (or wherever) and copying and pasting a code block from there.
Based on this evidence, I am highly skeptical that software development is going to become significantly less expensive in the near term due to LLMs, let alone 10x or 100x less expensive.
One of the best comments I've ever read on the EA Forum! I agree on every point, especially that making up numbers is a bad practice.
I also agree that expanding the reach of effective altruism (including outreach and funding) beyond the Anglosphere countries sounds like a good idea.
And I agree that the kind of projects that get funded and supported (and the kind of people who get funded and supported) seems unduly biased toward a Silicon Valley worldview.
I believe Bob Jacobs is a socialist, although I don't know what version of socialism he supports. "Socialism" is a fraught term, and even when people try to clarify what they mean by it, sometimes it still doesn't get less confusing.
I'm inclined to be open-minded towards Bob's critiques of effective altruism, but I get the sense that his critiques of EA and his ideas for reform are going to end up being a microcosm of socialist or left-wing critiques of society at large and socialist or left-wing ideas for reforming society.
My thought on that is summed up in the Beatles' song "Revolution":
You say you got a real solution, well, you know
We'd all love to see the plan
In principle, democracy is good, equality is good, less hierarchy is better than more hierarchy, not being completely reliant on billionaires and centimillionaires is good... But I need to know some more specifics on how Bob wants to achieve those things.
I see it primarily as a social phenomenon because I think the evidence we have today that AGI will arrive by 2030 is less compelling than the evidence we had in 2015 that AGI would arrive by 2030. In 2015, it was a little more plausible that AGI could arrive by 2030 because that was 15 years away and who knows what can happen in 15 years.
Now that 2030 is a little less than 5 years away, AGI by 2030 is a less plausible prediction than it was in 2015 because there's less time left and it's clearer it won't happen.
I don't think the reasons people believe AGI will arrive by 2030 are primarily based on evidence; the belief is primarily a sociological phenomenon. People were ready to believe this regardless of the evidence, going back to Ray Kurzweil's The Age of Spiritual Machines in 1999 and Eliezer Yudkowsky's "End-of-the-World Bet" in 2017. People don't really pay attention to whether the evidence is good or bad. They ignore obvious evidence and arguments against near-term AGI, and they mostly choose to ignore or attack people who express disagreement, tuning instead into the relentless drumbeat of people agreeing with them. This is sociology, not epistemology.
Don't believe me? Talk to me again in 5 years and send me a fruit basket. (Or just kick the can down the road and say AGI is coming in 2035...)
Expert opinion has changed? First, expert opinion is not itself evidence; it's people's opinions about evidence. What evidence are the experts basing their beliefs on? That seems way more important than someone just saying a number based on an intuition.
Second, expert opinion does not clearly support the idea of near-term AGI.
As of 2023, the expert opinion on AGI was... well, first of all, really confusing. The AI Impacts survey found that the experts believed there is a 50% chance by 2047 that "unaided machines can accomplish every task better and more cheaply than human workers." And also that there's a 50% chance that by 2116 "machines could be built to carry out the task better and more cheaply than human workers." I don't know why these predictions are 69 years apart.
Regardless, 2047 is sufficiently far away that it might as well be 2057 or 2067 or 2117. This is just people generating a number using a gut feeling. We don't know how to build AGI and we have no idea how long it will take to figure out how. No amount of thinking of numbers or saying numbers can escape this fundamental truth.
We actually won't have to wait long to see that some of the most attention-catching near-term AI predictions are false. Dario Amodei, the CEO of Anthropic (a company that is said to be "literally creating God"), has predicted that by some point between June 2025 and September 2025, 90% of all code will be written by AI rather than humans. In late 2025 and early 2026, when it's clear Dario was wrong about this (when, not if), maybe some people will start to be more skeptical of attention-grabbing expert predictions. But maybe not.
There are already strong signs of AGI discourse being irrational and absurd. On April 16, 2025, Tyler Cowen claimed that OpenAI's o3 model is AGI and asked, "is April 16th AGI day?" In a follow-up post on April 17, seemingly in response to criticism, he said, "I don't mind if you don't want to call it AGI", but seemed to affirm he still thinks o3 is AGI.
On one hand, I hope that in 5 years the people who promoted the idea of AGI by 2030 will lose a lot of credibility and maybe will do some soul-searching to figure out how they could be so wrong. On the other hand, there is nothing preventing people from being irrational indefinitely, such as:
Defining whatever exists in 2030 as AGI (Tyler Cowen already did it in 2025, and Ray Kurzweil pioneered the technique years ago).
Kicking the can down the road a few years, and repeating as necessary (similar to how Elon Musk has predicted, every year from 2015 to 2025, that the Tesla fleet will achieve Level 4-5 autonomy within a year, and has not given up the game despite his losing streak).
Telling a story in which AGI didn't happen only because effective altruists or other good actors successfully delayed AGI development.
I think part of the sociological problem is that people are just way too polite about how crazy this all is and how awful the intellectual practices of effective altruists have been on this topic. (Sorry!) So, I'm being blunt about this to try to change that a little.
Good Ventures have stopped funding efforts connected with the rationality community and rationality,
This confused me at first until I looked at the comments. The EA Forum post you linked to doesn't specifically say this. The Good Ventures blog post that forum post links to doesn't specifically say this either. I think you must be referring to the comments on that forum post, particularly between Dustin Moskovitz (who is now shown as "[anonymous]") and Oliver Habryka.
There are three relevant comments from Dustin Moskovitz, here, here, and here. These comments are oblique and confusing (and he seems to say somewhere else he's being vague on purpose). But I think Dustin is saying that he's now wary of funding things related to "the rationalist community" (defined below).
Edit (2025-05-03 at 12:59 UTC): To make it easier to see which comments on that post are Dustin Moskovitz's, you can use the Wayback Machine.
Dustin seems to indicate there are multiple reasons he doesn't want to fund things related to "the rationalist community" anymore, but he doesn't fully get into these reasons. From his comments, these reasons seem to include both a long history of problems (again, kept vague) and the then-recent Manifest 2024 conference that was hosted at Lighthaven (the venue owned by Lightcone Infrastructure, the organization that runs the LessWrong forum, which is the online home of "the rationalist community"). Manifest 2024 attracted negative attention due to the extreme racist views of many of the attendees.
We need to differentiate between "capital R" Rationality and "small r" rationality. By "capital R" Rationality, I mean the actual Rationalist community, centered around Berkeley...
On the other hand, âsmall râ rationality is a more general concept. It encompasses the idea of using reason and evidence to form conclusions, scout mindset, and empiricism. It also includes a quest to avoid getting stuck with beliefs resistant to evidence, techniques for reflecting on and improving mental processes, and, yes, many of the core ideas of Rationality, like understanding Bayesian reasoning.
I think the way you tried to make this distinction is not helpful and actually adds to the confusion. We need to distinguish two very different things:
The concept of rationality as it has historically been used for centuries and which is what the vast majority of people on Earth still associate the word "rationality" with today. This older and more universal concept of rationality is discussed in places like the Wikipedia article for rationality and in academic philosophy. Rationality in this sense is usually considered synonymous with "reason", as in "reasoning". You could also try to define rationality as "good thinking" or, as Steven Pinker defines it in an article for Encyclopedia Britannica, as "the use of knowledge to attain goals."
The specific worldview, philosophy, lifestyle, or subculture that people on LessWrong and a small number of people in the San Francisco Bay Area call "rationalism". (Wikipedia calls this "the rationalist community".)
The online and Bay Area-based "rationalist community" (2) tends to believe it has especially good insight into the older, more universal concept of rationality (1) and that self-identified "rationalists" (2) are especially good at being rational or practicing rationality in that older, more universal sense (1). Are they?
No.
Calling yourselves "rationalists" and your movement or community "rationalism" is just a PR move, and a pretty annoying one at that. It's annoying for a few reasons, partly because it's arrogant and partly because it leads to confusion like the confusion in this post, where the centuries-old and widely-known concept of rationality (1) gets conflated with an eccentric, niche community (2). It makes ancient, universal terms like "rational" and "rationality" contested ground, with this small group of people with unusual views (many of them irrational) staking a claim on these words.
By analogy, this community could have called itself "the intelligence movement" or "the intelligence community". Its members could have self-identified as something like "intelligent people" or "aspirationally intelligent people". That would have been a little bit more transparently annoying and arrogant.
So, is Good Ventures or effective altruism ever going to disavow or distance itself from the ancient, universal concept of rationality (1)? No. Absolutely not. Never. That would be absurd.
Has Good Ventures disavowed or distanced itself from LessWrong/Bay Area "rationalism" or "the rationalist community" (2)? I don't know, but those comments from Dustin that I linked to above suggest that maybe this is the case.
Will effective altruism disavow or distance itself from LessWrong/Bay Area "rationalism" or "the rationalist community" (2)? I don't know. I want this to happen because I think "the rationalist community" (2) decreases the rationality (1) of effective altruism. The more influence the LessWrong/Bay Area "rationalist" subculture (2) has over effective altruism, the less I like effective altruism and the less I want to be a part of it.
If Dustin and Good Ventures are truly done with "the rationalist community" (2), that sounds like good news for Dustin, for Good Ventures, and probably for effective altruism. It's a small victory for rationality (1).
Unimportant note: I made some edits to this comment for clarity on 2025-05-03 at 09:37 UTC, specifically the part about Dustin Moskovitz. I was a bit confused trying to read my own comment nine days later, so I figured I could improve the clarity. The edits don't change the substance of this comment. You can see the previous version of the comment in the Wayback Machine, but don't bother, there's really no point.
This story might surprise you if you've heard that EA is great at receiving criticisms. I think this reputation is partially earned, since the EA community does indeed engage with a large number of them. The EA Forum, for example, has given "Criticism of effective altruism" its own tag. At the moment of writing, this tag has 490 posts on it. Not bad.
Not only does EA allow criticisms, it sometimes monetarily rewards them. In 2022 there was the EA criticism contest, where people could send in their criticisms of EA and the best ones would receive prize money. A total of $120,000 was awarded to 31 of the contest's 341 entries. At first glance, this seems like strong evidence that EA rewards critiques, but things become a little bit more complicated when we look at who the winners and losers were.
After giving it a look, the EA Criticism and Red Teaming Contest is not what I would describe as being about "criticism of effective altruism", either in terms of what the contest asked for in the announcement post or in terms of what essays ended up winning the prizes. At least not mostly.
When you say "criticism of effective altruism", that makes me think of the sort of criticism that a skeptical outsider would make about effective altruism. Or that it would be about the kind of thing that might make a self-identified effective altruist think less of effective altruism overall, or even consider leaving the movement.
Out of 31 essays that won prizes, only the following four seem like "criticism of effective altruism", based on the summaries:
"Effective altruism in the garden of ends" by Tyler Alterman (second prize)
"Notes on effective altruism" by Michael Nielsen (second prize)
"Critiques of EA that I want to read" by Abraham Rowe (honourable mention)
"Leaning into EA Disillusionment" by Helen (honourable mention)
The essay "Criticism of EA Criticism Contest" by Zvi (which got an honourable mention) points out what I'm pointing out, but I wouldn't count this one because it doesn't actually make criticisms of effective altruism itself.
This is not to say anything about whether the other 27 essays were good or bad, or whether the contest was good or bad. Just that I think this contest was mostly not about "criticisms of EA".
I don't know the first thing about American non-profit law, but a charity turning into a for-profit company seems like it can't possibly be legal, or at least it definitely shouldn't be.
I think it was a great idea to transition from a full non-profit (or whatever it was; OpenAI's structure is so complicated) to spinning out a capped-profit for-profit company that is majority owned by the non-profit. That's an exciting idea! Let investors own up to 49% of the for-profit company and earn up to a 100x return on their investment. Great.
(Edited on 2025-05-06 at 05:25 UTC: I found an article that claimed the OpenAI non-profit only owned 2% of the OpenAI company. I don't know if this is true. I can't find clear information on how much of the company the non-profit currently owns or has owned in the past.)
Maybe more non-profits could try something similar. Novo Nordisk, the company that makes semaglutide (sold under the brand names Ozempic, Rybelsus, and Wegovy), is majority controlled by a non-profit, the Novo Nordisk Foundation. It seems like this model sometimes really works!
But to now give majority ownership and control of the for-profit OpenAI company to outside investors? How could that possibly be justified?
Is OpenAI really not able to raise enough capital as is? Crunchbase says OpenAI has raised $62 billion so far. I guess Sam Altman wants to raise hundreds of billions if not trillions of dollars, but, I mean, is OpenAI's structure really an obstacle there? I believe OpenAI is near or at the top of the list of private companies that have raised the most capital in history. And the recent funding round of $40 billion, led by SoftBank, is more capital than many large companies have raised through initial public offerings (IPOs). So, OpenAI has raised historic amounts of capital, and yet it needs to take majority ownership away from the non-profit so it can raise more?
This change could possibly be legally justified if the OpenAI non-profit's mission had been just to advance AI or something like that. Then I guess the non-profit could spin out startups all it wants, similar to what New Harvest has done with startups that use biotech to produce animal-free animal products. But the OpenAI non-profit's mission was explicitly to put the development of artificial intelligence and artificial general intelligence (AGI) under the control of a non-profit board that would ensure the technology is developed and deployed safely and that its benefits are shared equitably with the world.
I hope this change isn't allowed to happen. I don't think AGI will be invented particularly soon. I don't think, contra Sam Altman, that OpenAI knows how to build AGI. And yet I still don't think a charity should be able to violate its own mission like this, for no clear social benefit, and when the for-profit subsidiary seems to be doing just fine.
Metaculus accepts predictions from just anybody, so Metaculus is not an aggregator of expert predictions. It's not even a prediction market.
I don't have to tell you that scaling inputs like money, compute, labour, and so on isn't the same as scaling outputs like capabilities or intelligence. So, evidence that inputs have been increasing a lot is not evidence that outputs have been increasing a lot. We should avoid conflating these two things.
I'm actually not convinced AI can drive a car today in any sense that was not also true 5 years ago or 10 years ago. I have followed the self-driving car industry closely and, internally, companies have a lot of metrics about safety and performance. These are closely held, and rarely is anything disclosed to the public.
We also have no idea how much human labour is required in operating autonomous vehicle prototypes, e.g., how often a human has to intervene remotely.
Self-driving car companies are extremely secretive about the information that is the most interesting for judging technological progress. And they simultaneously have strong and aggressive PR and marketing. So, I'm skeptical. Especially since there is a history of companies like Cruise making aggressive, optimistic pronouncements and then abruptly announcing that the company is over.
Elon Musk has said full autonomy is one year away every year since 2015. That's an extreme case, but others in the self-driving car industry have also set timelines and then blown past them.
There's a big difference between behaviours that, if a human can do them, indicate a high level of human intelligence versus behaviours that we would need to see from a machine to conclude that it has human-level intelligence or something close to it.
For example, if a human can play grandmaster-level chess, that indicates high intelligence. But computers have played grandmaster-level chess since the 1990s. And yet clearly artificial general intelligence (AGI) or human-level artificial intelligence (HLAI) has not existed since the 1990s.
The same idea applies to taking exams. Large language models (LLMs) are good at answering written exam questions, but their success on these questions does not indicate they have an equivalent level of intelligence to humans who score similarly on those exams. Assuming it does is a fundamental error, akin to saying IBM's Deep Blue is AGI.
If you look at a test like ARC-AGI-2, frontier AI systems score well below the human average.
On average, it doesn't appear that AI experts do in fact agree that AGI is likely to arrive within 5 or 10 years, although of course some AI experts do think that. One survey of AI experts found their median prediction is a 50% chance of AGI by 2047 (23 years from now), which is actually compatible with the prediction from Geoffrey Hinton you cited, who's thrown out 5 to 20 years with 50% confidence as his prediction.
Another survey found an aggregated prediction that there's a 50% chance of AI being capable of automating all human jobs by 2116 (91 years from now). I don't know why those two predictions are so far apart.
(Edit on 2025-05-03 at 09:21 UTC: Oops, those are actually responses to two different questions from the same survey (the 2023 AI Impacts survey), not two different surveys. The difference of 69 years between the two predictions is wacky. I don't know why there is such a huge gap.)
If it seems to you like there's a consensus around short-term AGI, that probably has more to do with who you're asking or who you're listening to than what people, in general, actually believe. I think a lot of AGI discourse is an echo chamber where people continuously hear their existing views affirmed and re-affirmed, and reasonable criticism of these views, even criticism from reputable experts, is often not met warmly.
Many people do not share the intuition that frontier AI systems are particularly smart or useful. I wrote a post here that points out, so far, AI does not seem to have had much of an impact on either firm-level productivity or economic growth, and has achieved only the most limited amount of labour automation.
LLM-based systems have multiple embarrassing failure modes that seem to reveal they are much less intelligent than they might otherwise appear. These failures seem like fundamental problems with LLM-based systems and not something that anyone currently knows how to solve.
What does this have to do with effective altruism?
So, you want to try to lock in AI forecasters to onerous and probably illegal contracts that forbid them from founding an AI startup after leaving the forecasting organization? Who would sign such a contract? This is even worse than only hiring people who are intellectually pre-committed to certain AI forecasts. Because it goes beyond a verbal affirmation of their beliefs to actually attempting to legally force them to comply with the (putative) ethical implications of certain AI forecasts.
If the suggestion is simply promoting "social norms" against starting AI startups, well, that social norm already exists to some extent in this community, as evidenced by the response on the EA Forum. But if the norm is too weak, it won't prevent the undesired outcome (the creation of an AI startup), and if the norm is too strong, I don't see how it doesn't end up selecting forecasters for intellectual conformity. Because non-conformists would not want to go along with such a norm (just like they wouldn't want to sign a contract telling them what they can and can't do after they leave the forecasting company).
One of the authors responds to the comment you linked to and says he was already aware of the concept of the multiple stages fallacy when writing the paper.
But the point I was making in my comment above is how easy it is for reasonable, informed people to generate different intuitions that form the fundamental inputs of a forecasting model like AI 2027. For example, the authors intuit that something would take years, not decades, to solve. Someone else could easily intuit it will take decades, not years.
The same is true for all the different intuitions the model relies on to get to its thrilling conclusion.
Since the model can only exist by using many such intuitions as inputs, ultimately the model is effectively a re-statement of these intuitions, and putting these intuitions into a model doesn't make them any more correct.
In 2-3 years, when it turns out the prediction of AGI in 2027 is wrong, it probably won't be because of a math error in the model but rather because the intuitions the model is based on are wrong.
I don't know how Epoch AI can both "hire people with a diversity of viewpoints in order to counter bias" and ensure that its former employees won't try to "cash in on the AI boom in an acceleratory way". These seem like incompatible goals.
I think Epoch has to either:
Accept that people have different views and will have different ideas about what actions are ethical, e.g., they may view creating an AI startup focused on automating labour as helpful to the world and benign
or
Only hire people who believe in short AGI timelines and high AGI risk and, as a result, bias its forecasts towards those conclusions
Is there a third option?
There's also a big difference between what's technically illegal and what a court would realistically punish a person or an organization for doing, since the courts rely on discernment or, more fittingly, judgment. The latter is much more relevant for deciding whether you should use the word "fraud" in the title of a post about a charity.
TFD, I think your analysis is correct and incisive. I'm grateful to you for writing these comments on this post.
It seems clear that if Jaime had different views about the risk-reward of hypothetical 21st century AGI, nobody would be complaining about him loving his family.
Accusing Jaime of "selfishness", even though he used that term himself in (what I interpret to be) a self-deprecating way, seems really unfair and unreasonable, and just excessively mean. As you and Jeff Kaufman pointed out, many people who are accepted into the EA movement have the same or similar views as Jaime on who to prioritize and so on. These criticisms would not be levied against Jaime if he were not an AI risk skeptic.
The social norms of EA, or at least the EA Forum, are different today than they were ten years ago. Ten years ago, if you said you only care about people who are either alive today or who will be born in the next 100 years, and you don't think much about AGI because global poverty seems a lot more important, then you would be fully qualified to be the president of a university EA group, get a job at a meta-EA organization, or represent the views of the EA movement to a public audience.
Today, it seems like there are a lot more people who self-identify as EAs who see focusing on global poverty as more or less a waste of time relative to the only thing that matters, which is that the Singularity is coming in about 2-5 years (unless we take drastic action), and that all our efforts should be focused on making sure the Singularity goes well and not badly, including trying to delay it if that helps. People who disagree with this view have not yet been fully excluded from EA, but it seems like some people are pretty mean to people who disagree. (I am one of the people who disagrees.)
As a side note, it's also strange to me that people are treating the founding of Mechanize as if it has a realistic chance to accelerate AGI progress more than a negligible amount: enough of a chance of enough of an acceleration to be genuinely concerning. AI startups are created all the time. Some of them state wildly ambitious goals, like Mechanize. They typically fail to achieve these goals. The startup Vicarious comes to mind.
There are many startups trying to automate various kinds of physical and non-physical labour. Some larger companies like Tesla and Alphabet are also working on this. Why would Mechanize be particularly concerning or be particularly likely to succeed?
Did you let Sinergia know their website still shows the old, incorrect estimate of 354 instead of the new, updated estimate of 285? What reason do you have to believe that staff at Sinergia have an intent to deceive? Is it possible they forgot to update their website or havenât gotten around to it yet?
I'm guessing this is probably a response to the post that unfairly accused a charity of fraud? (The post I'm thinking of currently has -60 karma, 0 agree votes, 6 disagree votes, and 4 top-level comments that are all critical.)
Some criticism might be friendly and constructive enough that giving the organization a chance to write a reply before publishing is not that important. Or if the organization is large, powerful, and has lots of money, like Open Philanthropy, and especially if your criticisms are of a more general or a more philosophical kind, it might not be important to send them a copy before you publish. This depends partly on how influential you are in EA and on how harsh your criticisms are.
Accusing a small charity of fraud is definitely something you should run by the charity beforehand. In that case, though, the charity was already so frustrated with the critic's poor-quality criticism that they had publicly stated (before the fraud accusation) that they didn't want to engage with it anymore.