(Warning: Long comment ahead.)

First: Thank you for posting these thoughts. I have a lot of disagreements, as I explain below, but I appreciate the time you spent to express your concerns and publish them where people could read and respond. That demonstrates courage, as well as genuine care for the EA movement and the people it wants to help. I hope my responses are helpful.
Second: I recommend this below, but I'll also say it here: If you have questions or uncertainties about something in EA (for example, how EA funders model the potential impact of donations), try asking questions!
Asking on the Forum is good, but you can also write directly to the people who work on projects. They'll often respond to you, especially if your question is specific and indicates that you've done your own research beforehand. And even if they don't respond, your question will indicate the community's interest in a topic, and may be one factor that eventually leads them to write a blog/Forum post on the topic.
(For example, it may not be worth several hours of time for an EA Funds manager to write up their models for one person, but they may decide to do so after the tenth person asks, and for all you know, you could be person #10.)
Anyway, here are some thoughts on your individual points:
Samasource has lifted tens of thousands of people out of poverty with a self-sustaining model that, unlike GiveDirectly, is completely unreliant on continual donor funding, providing a tremendous multiplier on top of the funds that were initially used to establish Samasource.
It's easy to cherry-pick from among the world's tens of thousands of charities and find a few that seem to have better models than GiveWell's recommendations. The relevant questions are:
Could we have predicted Samasource's success ahead of time and helped it scale faster? If so, how? Overall, job/skills-training programs haven't had much success, and since only GiveWell was doing much charity research when Samasource was young (2008), it's understandable that they'd focus on areas that were more promising overall.
Could someone in EA found a program as successful as Samasource? If so, how? A strategy of "take the best thing you can find and copy it" doesn't obviously seem stronger than "take an area that seems promising and try to found an unusually good charity within that area", which people in EA are already doing.
Also, have you heard of Wave? It's a for-profit startup co-founded by a member of the EA community, and it has at least a few EA-aligned staffers. They provide cheap remittances to help poor people lift their families out of poverty faster, and as far as I know, they haven't had to take any donations to do so. That's the closest thing to an EA Samasource I can think of.
(If you have ideas for other self-sustaining projects you think could be very impactful, please post about them on the Forum!)
The EA movement originally threw around the idea of earning to give, a concept which was later retracted as a key talking point in favor of theoretically more impactful options. But the fact that a movement oriented around maximizing impact started out with earning to give is worrying. Even if earning to give became popular with hundreds to thousands of people, which in fact ended up happening, the impact on the world would be fairly minimal compared to the impact other actors have.
My model of early EA is that it focused on the following question:
"How can I, as an individual, help the world as much as possible?"
But that question also had some subtext:
"...and also, I probably want to do this somewhat reliably, without taking on too much risk."
The first people in EA were more or less alone. There weren't any grants for EA projects. There wasn't a community of thousands of people working in dozens of EA-aligned organizations. There were a few lonely individuals (and one or two groups large enough to meet up at someone's house and chat).
Under these circumstances, projects like "founding the next Samasource" seem a lot less safe, and it's hard to fault early adopters for choosing "save a couple of lives every year, reliably, while holding down a steady job and building career capital for future moves".
(Consider that a good trader at an investment bank could become a C-level executive with tens of millions of dollars at their disposal. The odds of this don't seem much worse than the odds that a random EA-aligned nonprofit founder creates something as good as Samasource, and they might be better.)
In general, this is a really good thing to remember when you think about the early history of the EA community: for the first few years, there really wasn't much of a "community". Even after a few hundred people had joined up, it would have taken a lot of gumption to predict that the movement was going to be capable of changing the world in a grand-strategic sense.
As an example issue, in terms of financial resources, the entire EA community and all of its associated organizations are being outspent and outcompeted by St. Jude's alone. Earning to give might not resolve the imbalance, but getting a single additional large donor on board might.
There are quite a few people in EA who work full-time on donor relations and donor advisory. As a result of this work, I know of at least three billionaires who have made substantial contributions to EA projects, and there are probably more that I don't know of (not to mention many more donors at lower but still-stratospheric levels of wealth).
Also, earning to give has outcomes beyond "money goes to EA charities". People working at high-paid jobs in prestigious companies can get promoted to executive-level positions, influence corporate giving, influence colleagues, etc.
For example, employees of Google Boston organize a GiveWell fundraiser that brings in hundreds of thousands of dollars each year on top of their normal jobs (I'd guess this requires a few hundred hours of work at most).
Another example: in his first week on the job, the person who co-founded EA Epic with me walked up to the CEO after her standard speech to new employees and handed her a copy of a Peter Singer book. The next Monday, he got a friendly email from the head of Epic's corporate giving team, who told him the CEO had enjoyed the book and asked her to get in touch. While his meeting with the corporate giving head didn't lead to any concrete results, the CEO was beginning to work on her foundation this year, and it's possible that some of her donations may eventually be EA-aligned. Things like that won't happen unless people in EA put themselves in a position to talk to rich/powerful people, and not all of those people use philanthropic advisory firms.
(A lot of good can still be done through philanthropic advisory, of course; my point is that safe earning-to-give jobs still offer opportunities for high-reward risks.)
Perhaps EAs would be fanning out at high net worth advisory offices to do philanthropic advisory instead of working at Jane Street. Perhaps EAs would be working as chiefs of staff for major CEOs to have a chance at changing minds.
Some specific examples of high-net-worth advisory projects from people in EA:
Effective Giving (works in the UK and the Netherlands)
Harvard EA's Philanthropy Advisory Fellowship
Good Ventures
Raising for Effective Giving
Alex Foster's work as a philanthropy advisor at Veddis
This isn't to say that we couldn't have had a greater focus on reaching high-net-worth advisory offices earlier on in the movement, but it didn't take EA very long to move in that direction.
(I would be curious to hear how various people involved in early EA viewed the idea of "trying to advise rich people in a more formal way".)
It's also worth mentioning that 80K does list philanthropic advising as one of their priority paths. My guess is that there aren't many jobs in that area, and that existing jobs may require luck/connections to get, but I'd love to be proven wrong, because I've thought for a long time that this is a promising area. (I myself advise a small family foundation on their giving, and it's been a rewarding experience.)
Perhaps the movement would conduct research on how Warren Buffett decided on the Bill and Melinda Gates Foundation instead of less optimal choices, and whether outreach, networking, or persuasion methods would be effective.
There is some EA research on the psychology of giving (the researchers I know of here are Stefan Schubert and Lucius Caviola), but this is an area I think we could scale if anyone were interested in the subject; maybe this is a genuine gap in EA?
I'd be interested to see you follow up on this specific topic.
There are multitudes of high impact activities that may not require small ultra-curated teams and can involve currently underutilized community members.
Which activities? If you point out an opportunity and make a compelling case for it, there's a good chance that you'll attract funding and interested people; this has happened many times already in the brief history of EA. But so far, EA projects that tried to scale quickly with help from people who weren't closely aligned generally haven't done well (as far as I know; I may be forgetting or not know of more successful projects).
As a final example, EA is very weak compared to all of the other forces in the world in all relevant senses of the term: weak in financial resources, weak in number of people, weak in political power.
This is true, but considering that the movement literally started from scratch ten years ago, and is built around some of the least marketable ideas in the world (don't yield to emotion! Give away your money! Read long articles!), it has gained strength at an incredible pace.
Some achievements:
Multiple billionaires are heavily involved.
One of the top X-risk organizations is run by a British lord who has held some of the most influential positions in his country.
GiveDirectly is working on multiple projects with the world's largest international aid organizations, which have the potential to sharply increase the impact of billions of dollars in spending.
There are active student effective altruism groups at more than half of the world's top 20 universities. Most of these groups are growing and becoming more active over time.
One of the most popular media sources for the Western liberal elite has an entire section devoted to effective altruism, whose top journalist is someone who didn't have much (any?) prior journalistic experience but did run the most popular EA Tumblr.
The former head of IARPA runs an AI risk think tank in Washington.
Ten years ago, a nascent GiveWell was finding its footing after an online scandal nearly ended the project, and Giving What We Can was about to launch with 23 members. We've come a long way.
Is this rate of growth sufficient? Maybe not. We may not acquire enough influence to stop the next world-rending disaster before it happens. But we've done remarkably well despite some setbacks, and critique-in-hindsight of EA goals has a high bar to clear in order to show that things could have gone much better.
(As I noted above, though, I think you're right that we could have paid more attention to certain ideas early on.)
Substantial strategic research and analysis is required to assess the current course of action and evaluate better courses of action. It's not clear to me why there has been such limited discussion of this and progress so far unless everyone thinks being financially outmatched by St. Jude's for the next 5+ years is an optimal course of action that does not require community strategizing to address.
The end of the last sentence has a condescending tone that slightly sours my feelings toward this piece, even though I can appreciate the point you're trying to make.
I'm in favor of more strategic discussion, but many of the strategy suggestions I've seen on the Forum suffer from at least one of the following:
A lack of specificity (a problem is noted, but no solution is proposed, or a solution is proposed with very little detail / no modeling of any kind)
A lack of knowledge of the full scope of the present-day movement (it's easy to reduce EA to consisting of GiveWell, Open Phil, 80K, and CEA, but there's a lot more going on than that; I often see people propose ideas that are already being implemented)
"Someone should do X" syndrome (an idea is proposed which could go very well, but then no one ever follows up with a more detailed proposal or a grant application). In theory, EA orgs could pick up these ideas and fund people to work on them, but if your idea doesn't fit the focus of any particular organization, some individual will have to pick it up and run with it.
These suggestions are still frequently useful, and I've heard many of them discussed within EA organizations, but I wish that writers would, on average, move away from abstract worries and criticism and move toward concrete suggestions and proposals.
(By the way, I'm always happy to read anyone's Forum posts ahead of time and make suggestions for ways to make them more concrete, people the author might want to talk to before publishing, etc.)
Samasource, for example, may very well be orders of magnitude more effective per dollar of total lifetime donations than GiveDirectly. The longer Samasource runs a financially self-sustaining model, the better the impact per donor dollar will be. But Samasource was not started based on rigorous research. If we pretend it was never started and it sought funding from the EA community today to launch, Samasource may very well have gone unfunded and never have existed, which is a problem if it is actually comparably effective or more effective than GiveDirectly.
Two notes:
1. GiveDirectly isn't just giving money directly to people; it is also changing the aid sector by establishing the idea that aid should clear the "cash benchmark". This has already begun to influence WHO and USAID, as well as many NGOs and private foundations, and the eventual impact of that influence is really hard to calculate (not to mention the value of experimental data on basic income programs, etc.)
2. The apt comparison is not "funding Samasource vs. funding GiveDirectly". The apt comparison is "funding the average early-stage Samasource-like thing vs. funding GiveDirectly". Most of the money put into Samasource-like things probably won't have nearly as much impact as money given directly to poor people. We might hit on some kind of fantastically successful program and get great returns, but that isn't guaranteed or even necessarily likely.
It is also possible that, using reasoning based on Fermi estimates, we can work out with reasonable confidence whether organizations have been more effective than top EA charities. We can certainly use Fermi estimates to assess the potential impact of ideas, startups, and proposed projects. I expect that a relevant number of these estimates will have a higher expected impact per dollar than top charities.
We will definitely find that some organizations have been more effective than top EA charities, but as I've said already, this cherry-picking won't help us unless we learn general lessons that help us make future funding decisions. Open Phil does some of this already with their History of Philanthropy work.
There's value in using Fermi estimates for potential projects, yes, but why do you think those would help us make better predictions about the world than the models used by GiveWell, Open Phil, EA Funds, etc.? Is there some factor you think these organizations routinely undervalue? Some valuable type of idea they never look at?
(Also, EA funding goes well beyond "top charities" at this point: GiveWell's research is expanding to cover a lot more ground, and the latest grant recommendations from the Long-Term Future Fund included a lot of experimental research and ideas.)
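To make that concrete, here is a minimal sketch of the kind of two-outcome Fermi estimate I assume you have in mind. Every number in it is invented purely for illustration; the point is that the disputed inputs, not the arithmetic, carry all the weight:

```python
# Minimal two-outcome Fermi estimate (all numbers invented for illustration).
def expected_impact_per_dollar(p_success: float, impact_if_success: float,
                               impact_if_failure: float = 0.0) -> float:
    """Expected impact per dollar for a simple succeed/fail bet."""
    return p_success * impact_if_success + (1 - p_success) * impact_if_failure

# Baseline: direct cash transfers deliver value reliably (illustrative units).
cash_baseline = expected_impact_per_dollar(p_success=0.95, impact_if_success=1.0)

# Early-stage, self-sustaining project: small chance of a large multiplier.
startup_bet = expected_impact_per_dollar(p_success=0.05, impact_if_success=30.0)

print(f"Cash baseline: {cash_baseline:.2f} units per dollar")  # 0.95
print(f"Startup bet:   {startup_bet:.2f} units per dollar")    # 1.50
```

With these made-up inputs, the startup bet wins; drop its success rate from 5% to 2% and it loses. Estimating that probability well is exactly what funders' models are for, which is why I'd want to see their actual inputs before concluding that Fermi estimates would outperform them.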
I am not aware whether funding entities like EA Grants apply explicit quantitative models to estimate EVs and use model outputs for decision-making.
Did you write to any funding entities before writing this post to ask about their models?
Generally, these organizations are happy to share at least the basics of their approach, and I think this post would have benefited from having concrete models to comment on (rather than guesses about how Fermi estimates and decision analysis might compare to whatever funders are doing).
It is possible that strategically thinking about career impact is a superior option compared to common courses of action like directly working at an EA organization in operations or earning to give. Careers can have unintuitive but wonderful opportunities for impact.
No EA organization in the world will try to stop you from "strategically thinking about career impact". 80K's process explicitly calls on individuals to consider their options carefully, with a lot of self-reflection, before making big decisions. I'm not sure what you think is missing from the "standard" EA career decision process (if such a thing even exists).
Kevin Briggs' career approach saved many more lives than a typical police officer's, and falls in the same general range as the number of statistical lives that can be saved with global health donations.
Let's say I'm choosing between two careers. In Career A, I can save 200 lives before I retire if I manage to perform unusually well, to the point where my career is newsworthy and I'm hailed as a moral exemplar. In Career B, I can save 200 lives before I retire if I do my job reasonably well, collect paychecks, and donate what I don't need.
The higher-EV option in this scenario is Career B, and it isn't close.
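To see why, here's a minimal sketch of the expected-value arithmetic (both success probabilities are assumptions I've invented purely for illustration):

```python
# Expected lives saved under each career.
# Both probabilities below are invented for illustration only.
LIVES_IF_IT_WORKS = 200

p_newsworthy_success = 0.01  # Career A: odds of an exceptional, newsworthy career
p_steady_donor = 0.70        # Career B: odds of doing the job well and donating steadily

ev_career_a = p_newsworthy_success * LIVES_IF_IT_WORKS  # 2 lives in expectation
ev_career_b = p_steady_donor * LIVES_IF_IT_WORKS        # 140 lives in expectation

print(f"Career A: {ev_career_a:.0f} lives in expectation")
print(f"Career B: {ev_career_b:.0f} lives in expectation")
```

Even if you think Career A's odds are ten times better than my guess, Career B still comes out far ahead.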
On the other hand, this next example gets closer to proving your point, which is that some careers have much higher potential impact than most ETG opportunities:
The Introduction to Effective Altruism mentions the fantastic actions of Stanislav Petrov, Norman Borlaug, and others that saved a tremendous number of lives, each with a different career.
The point of that section of the introduction isn't to comment on the career choices of Petrov and Borlaug, but to emphasize that even "ordinary" people can have a tremendous impact; it's meant to be inspirational, not advisory. (Source: I recently rewrote that section of the introduction.)
Petrov's heroic actions came about as a result of a very unlikely accident and have little bearing on whether one should become a soldier. Maybe soldiering is worthwhile if you can specifically become an officer at a nuclear facility, but that seems difficult.
Borlaug's work is a bit more typical of what an impact-focused scientist can achieve, in that at least a few other scientists have also saved millions of lives.
Open Phil agrees with both of us on the potential of science; they've given tens of millions of dollars to hundreds of scientists over the last few years. Meanwhile, 80K considers certain branches of science to be priority paths, and the 2017 EA Donor Lottery winner gave most of his winnings to an organization trying to follow in Borlaug's footsteps.
It may be possible to have a tremendous social impact in a large number of specialties, from accounting to dentistry to product testing, simply by identifying scalable, sufficiently positive interventions within the field.
I agree! This is one of the reasons I'm enthusiastic about earning-to-give: if people in EA enter a variety of influential/wealthy fields and keep their wits about them, they may notice opportunities to create change. On the other hand, studying these professions and trying to change them from the outside seems less promising.
Remember also that problems must be tractable as well as large-scale. Taking your example of "accounting", one could save Americans tens of millions of hours per year by fighting for tax simplification. But in the process, you'd need to:
Develop a strong understanding of tax law and the legislative process.
Raise millions of dollars in lobbying funds and use them effectively to grab attention from congresspeople.
Go head-to-head with Intuit and Grover Norquist, who will be spending their own millions to fight you.
I love tax simplification. It's one of my pet causes, something I'll gripe about or retweet at the slightest opportunity. But I don't think I'd be likely to have much of an impact throwing my hat into that particular ring, alongside hundreds of other people who have been arguing about it for decades. I'd rather focus on pulling the rope sideways (fighting for causes and ideas that have high potential and no major enemies).
Fantastic stuff Aaron. Even as someone who has followed EA forum/newsletters/blogs for 2-3 years, there were quite a few things I didn't know about. Thanks!
Along the lines of "EA lacks a system to suggest, discuss, and evaluate improvements to EA community strategy and recommendations issued to the community", this is exemplary of the lack of community-level coordination and knowledge management in EA. Efforts like the EA Wiki, which received no substantial support from the community whatsoever, have failed. The entire area has minimal interest and attention allocated to it. The EA Forum lacks some extremely obvious features: a registry of everyone in EA, for starters (edit: which was relegated to the volunteer-run and poorly resourced EA Hub for years).
My apologies for the extended delay in response! I appreciate the engagement.
I recommend this below, but I'll also say it here: If you have questions or uncertainties about something in EA (for example, how EA funders model the potential impact of donations), try asking questions!
Contrary to your assumption, I have a lot of information on EA, and I'm aware that the problems I'm pointing out aren't being addressed. There is likely a gap in understanding that is common in written communication.
This communication gap would be less of a problem if the broader issue ("EA lacks a system to suggest, discuss, and evaluate improvements to EA community strategy and recommendations issued to the community") were addressed. As a specific example, where are the exact criticisms of longtermism, or the precise strategies and tactics of specific EA organizations, laid out? There should be a publicly available argument map for this, as a rudimentary example of what such a proposed system should look like. There's a severe gap in coordination and collective intelligence software in EA.
It's easy to cherry-pick from among the world's tens of thousands of charities and find a few that seem to have better models than GiveWell's recommendations. The relevant questions are:
That was just an example of a charity that would have gone unfunded, due to a combination of poor assumptions (that the cause area/intervention type is ineffective not just on average but in 100% of instances), failure to consider revenue-positive models, etc.
EA misses out on entire classes of strategies that are likely pretty good. Revenue-positive models, for one, have mostly gone ignored. That's not to say there's not a single EA pursuing them, but there's a marked lack of interest and support, which is just as bad, since the community's mission is to advance high-impact efforts.
Could we have predicted Samasource's success ahead of time and helped it scale faster? If so, how? Overall, job/skills-training programs haven't had much success, and since only GiveWell was doing much charity research when Samasource was young (2008), it's understandable that they'd focus on areas that were more promising overall.
That's assuming that 100% of job/skills-training programs are doomed to failure... kind of like assuming 100% of charities are doomed to be ineffective. But if we used that logic, EA wouldn't exist. Doing this analysis at the level of cause areas and intervention types could be fundamentally problematic.
Could someone in EA found a program as successful as Samasource? If so, how? A strategy of "take the best thing you can find and copy it" doesn't obviously seem stronger than "take an area that seems promising and try to found an unusually good charity within that area", which people in EA are already doing.
Yes, EAs could certainly do such a thing, which would be easier if entrepreneurship were more encouraged. With only a few career areas identified as promising at any given time (and they do shift with time, very annoyingly; there should be much less confidence about this), it is hard for people to even pursue this strategy, let alone get funding.
There's a lack of evaluation resources available for assessing new models that don't align with what's already been identified, which is a huge problem.
Also, have you heard of Wave? It's a for-profit startup co-founded by a member of the EA community, and it has at least a few EA-aligned staffers. They provide cheap remittances to help poor people lift their families out of poverty faster, and as far as I know, they haven't had to take any donations to do so. That's the closest thing to an EA Samasource I can think of.
The existence of something within EA (a minority, for example) does not mean that it is adequately represented.
(If you have ideas for other self-sustaining projects you think could be very impactful, please post about them on the Forum!)
Using a Forum is a terrible mechanism for collective intelligence and evaluation (not to say this issue is unique to EA).
Under these circumstances, projects like "founding the next Samasource" seem a lot less safe, and it's hard to fault early adopters for choosing "save a couple of lives every year, reliably, while holding down a steady job and building career capital for future moves".
The method by which strategy shifts percolate is rather problematic. A few people at 80K change their minds and the entire community shifts within a few years, causing many people to lose career capital, effort spent specializing in certain fields, etc. This is likely to continue in the future. Ignoring the problem that shifts cause, the current career strategies being advocated are likely not optimal at all and will shift. The fix is to both reduce the cultural reliance on such top-down guidance as well as completely rethink the mechanism by which career strategy changes are made.
There are quite a few people in EA who work full-time on donor relations and donor advisory. As a result of this work, I know of at least three billionaires who have made substantial contributions to EA projects, and there are probably more that I don't know of (not to mention many more donors at lower but still-stratospheric levels of wealth).
Again, the presence of some individuals and teams working on this does not mean it's the optimal allocation.
Also, earning to give has outcomes beyond "money goes to EA charities". People working at high-paid jobs in prestigious companies can get promoted to executive-level positions, influence corporate giving, influence colleagues, etc.
Those direct and indirect consequences should all be factored into a quantitative model of impact for certain careers, which 80K doesn't do at all; they merely have a few bubbles for the "estimated earnings" of a job, with no public-facing holistic comparison methodology. Surprising, given how quantitative the community is.
For example, employees of Google Boston organize a GiveWell fundraiser that brings in hundreds of thousands of dollars each year on top of their normal jobs (I'd guess this requires a few hundred hours of work at most).
Saying earning to give has benefits does not mean it's the best course of action...
Another example: in his first week on the job, the person who co-founded EA Epic with me walked up to the CEO after her standard speech to new employees and handed her a copy of a Peter Singer book. The next Monday, he got a friendly email from the head of Epic's corporate giving team, who told him the CEO had enjoyed the book and asked her to get in touch. While his meeting with the corporate giving head didn't lead to any concrete results, the CEO was beginning to work on her foundation this year, and it's possible that some of her donations may eventually be EA-aligned. Things like that won't happen unless people in EA put themselves in a position to talk to rich/powerful people, and not all of those people use philanthropic advisory firms.
Yep, I mentioned influence in my post, but the question is whether this is the optimal way to do that (along with all of the other things individual EAs could be doing).
This isn't to say that we couldn't have had a greater focus on reaching high-net-worth advisory offices earlier on in the movement, but it didn't take EA very long to move in that direction.
Again, the allocation objection applies. The fact that these keep springing up is rather remarkable. Are we far from diminishing returns, or past that already? This should all be part of movement strategy considerations, done by a dedicated team, ideally with dedicated resources combined with collective-intelligence software.
It's also worth mentioning that 80K does list philanthropic advising as one of their priority paths. My guess is that there aren't many jobs in that area, and that existing jobs may require luck/connections to get, but I'd love to be proven wrong, because I've thought for a long time that this is a promising area. (I myself advise a small family foundation on their giving, and it's been a rewarding experience.)
Evidently jobs and the organizations that create them can be created if the movement so chooses.
There is some EA research on the psychology of giving (the researchers I know of here are Stefan Schubert and Lucius Caviola), but this is an area I think we could scale if anyone were interested in the subject; maybe this is a genuine gap in EA?
I'd be interested to see you follow up on this specific topic.
Analysis of the various gaps and merits of such could be done, as a terrible example, with some sort of voting mechanism (not quite a prediction market) for various gaps.
Which activities? If you point out an opportunity and make a compelling case for it, there's a good chance that you'll attract funding and interested people; this has happened many times already in the brief history of EA. But so far, EA projects that tried to scale quickly with help from people who weren't closely aligned generally haven't done well (as far as I know; I may be forgetting or not know of more successful projects).
I do not think this is true, given observations as well as people generally leaning away from novel ideas, smaller projects, etc.
This is true, but considering that the movement literally started from scratch ten years ago, and is built around some of the least marketable ideas in the world (don't yield to emotion! Give away your money! Read long articles!), it has gained strength at an incredible pace.
I have concerns over movement growth, but anyway, the point was what it could have been, instead of what it is.
(As I noted above, though, I think you're right that we could have paid more attention to certain ideas early on.)
A problem that continues to this day: a complete lack of investment in exploratory research (Charity Entrepreneurship has recommended this as an area), with promising new ideas and causes popping up all the time that go ignored and unfunded due to many factors, including lack of interest in whatever isn't being peddled by key influencers.
A lack of specificity (a problem is noted, but no solution is proposed, or a solution is proposed with very little detail / no modeling of any kind)
The fix is to have a centralized system storing both potential problems and solutions in a manner anyone can contribute to or vote on. Maybe in 5-10 years the EA Forum will look like this, or maybe not, given the slow (but steady) pace of useful feature development.
A lack of knowledge of the full scope of the present-day movement (it's easy to reduce EA to consisting of GiveWell, Open Phil, 80K, and CEA, but there's a lot more going on than that; I often see people propose ideas that are already being implemented)
This itself is the problem, and years later there is still no effort to fix collective knowledge and coordination issues.
"Someone should do X" syndrome (an idea is proposed which could go very well, but then no one ever follows up with a more detailed proposal or a grant application). In theory, EA orgs could pick up these ideas and fund people to work on them, but if your idea doesn't fit the focus of any particular organization, some individual will have to pick it up and run with it.
Answer: pay people. But there's no interest in doing so. Most funding decisions are made by a very small number of biased evaluators, biased in favor of certain theories of change, causes, larger organizations, existing organizations, interventions, etc. Thus the consequence: lots of people agree something should be done, but there is no mechanism for this knowledge to become a funding and resourcing decision.
2. The apt comparison is not "funding Samasource vs. funding GiveDirectly". The apt comparison is "funding the average early-stage Samasource-like thing vs. funding GiveDirectly". Most of the money put into Samasource-like things probably won't have nearly as much impact as money given directly to poor people. We might hit on some kind of fantastically successful program and get great returns, but that isn't guaranteed or even necessarily likely.
For all we know, revenue-generating poverty alleviation models are vastly better... there is no existing analysis to date exploring this idea, which is one of many thousands of strategic possibilities.
While Charity Entrepreneurship is doing this at a problem/solution level, that isn't sufficient in the slightest; huge assumptions are getting made. Sufficient resources exist for this to be done more robustly, but there is no appetite to fund it.
We will definitely find that some organizations have been more effective than top EA charities, but as I've said already, this cherry-picking won't help us unless we learn general lessons that help us make future funding decisions. Open Phil does some of this already with their History of Philanthropy work.
No, cherry-picking wasn't the point at all. This is a way to identify potential strategic opportunities being missed (replicable models could succeed), to backtest evaluation criteria (e.g., was it truly impossible to identify that this would work?), etc.
Again, the fact that something is happening doesn't mean we've reached the optimal level of exploration. We're probably investing thousands of times fewer resources than we should.
Did you write to any funding entities before writing this post to ask about their models?
No comment, but regardless, very biased based on certain evaluator opinions, lots of groupthink.
Generally, these organizations are happy to share at least the basics of their approach, and I think this post would have benefited from having concrete models to comment on (rather than guesses about how Fermi estimates and decision analysis might compare to whatever funders are doing).
Hearing about what they say they're doing is pretty useless; it does not map to what's actually happening.
No EA organization in the world will try to stop you from "strategically thinking about career impact". 80K's process explicitly calls on individuals to consider their options carefully, with a lot of self-reflection, before making big decisions. I'm not sure what you think is missing from the "standard" EA career decision process (if such a thing even exists).
People just listen to whatever 80K, Holden, etc. tell them, rather than independently reaching career decisions. I don't care what 80K tells you to do; they tell people to do a very large number of things. We need to look at what's actually happening.
The higher-EV option in this scenario is Career B, and it isn't close.
You're missing the point: you are using reasoning about averages. Your average officer likely doesn't save many counterfactual lives. What matters is the EV of your specific strategy, not of the overall career. With Petrov, this wasn't predictable, but it could have been with that specific officer strategy. Whether the officer in question did that calculation in advance is irrelevant (they probably didn't think about it at all); the fact is that it could have been foreseeable.
I agree! This is one of the reasons I'm enthusiastic about earning-to-give: if people in EA enter a variety of influential/wealthy fields and keep their wits about them, they may notice opportunities to create change. On the other hand, studying these professions and trying to change them from the outside seems less promising.
Lots of EAs actually have these ideas, and they're not listed anywhere. Right, outside analysis should be used to foresee these opportunities, but not to change fields. No effort is being spent on doing this outside analysis for career areas considered "low impact". There is no awareness of, or attempt to evaluate, things like dental interventions that are vastly more effective than existing interventions, for example. There are so many options, and they could all benefit from EV calculations.
It would require a big shift for EA to start doing these EV calculations for many areas. Again, a lot of work, but the people and money are here.
There needs to be this paradigm shift, followed by resource allocation, but it's probably never going to happen.
Remember also that problems must be tractable as well as large-scale. Taking your example of "accounting", one could save Americans tens of millions of hours per year by fighting for tax simplification. But in the process, you'd need to:
Tractability calculations should be case by case, not generalizable. And that's just one of many possibilities within accounting, none of which has received any attention or EV calculations. But yeah, that one might be hard.
I strong downvoted all of this person's recent comments (and strong downvoted my own comment you are reading) because this person's content is toxic, borderline abusive, and I think downvoting suppresses the content.
This is the worst content I have ever seen on the forum. What makes it bad is how it steals attention by raiding EA norms and existing criticism, smearing this all into a long slur made meaningless with an endless stream of discursion/pivots/padding.
I literally just don't want Aaron G. or someone else to spend time on this (Aaron's comment was lavish and I think other eyeballs and thoughts will spend time on this).
Separately and additionally, I can justify the downvoting by terrible content/lack of modesty/etc.
I am not "friends with Aaron" or anything like that. I'm not virtuous or anything like that.
I've upvoted most of Holocron's comments on this thread. They're not perfect, but I think they deserve a better response than this. (My votes were based on comment quality, not just because I wanted to reverse downvotes.)
Mass-downvoting everything a person writes is against the Forum's rules. If you disagree this vehemently with someone, explain why (in more detail than "this person's content is toxic" - I've read all of their comments and I have no idea what you mean).
I'm discussing this comment with our moderation team to see if we want to take further action.
Sometimes, when someone believes they perceive ill intent of a certain character, they will react very strongly; if they are wrong, this can be very inappropriate.
Thanks Aaron, I'm happy to see this is an actively enforced community norm. I must admit that, not knowing it was a norm, I did downvote comments in response, then subsequently proposed a monitoring system (or even a system warning against this as it's being done). Will undo the downvotes (or feel free to undo them on my behalf).
If you have feedback on making my comments better, please let me know!
I'll leave your votes to you - we don't actively remove votes unless the circumstances are extreme (e.g. removing all votes from a sockpuppet account someone used to upvote their own content).
I am very sorry you feel that way. I hope that by starting off my comment with "My apologies for the extended delay in response! I appreciate the engagement" I'm not indicating I'm "toxic" and "borderline abusive."
It's very concerning to see continual inaction on these matters, since I care about the future of this movement and the world, so when bringing up long-unaddressed problems (which you seem to be implicitly recognizing as somewhat valid) I don't think it's unreasonable to take a more critical tone. I'm fairly confident I can find a lot of content that is written in a much more severe manner on this forum, and most certainly on LessWrong.
How exactly am I "raiding EA norms"? In my communication style and terminology? That doesn't seem like a problem to me even if it were the case.
I literally just don't want Aaron G. or someone else to spend time on this (Aaron's comment was lavish and I think other eyeballs and thoughts will spend time on this).
You wish for people to ignore legitimate feedback, and to suppress it with downvoting? That doesn't sound like a way for a movement to improve. I do appreciate Aaron's engagement. While I think he may have misunderstood certain points I was trying to make, his information was nonetheless legitimately helpful, as indicated by the other commenter on this thread.
While this may not get much engagement, as you seem to recognize, there are very legitimate issues here, and a complete lack of action on multiple fronts. My views most certainly mirror those in the community, with even recent EA Forum posts alluding to similar ideas.
I think that pointing out legitimate areas of potential improvement should be valued in communities, and it should be acceptable to take somewhat critical tones as long as the intent is to not cause any emotional harm.
Unfortunately I don't have an unlimited amount of time every day to refine my tone and write detailed writeups on the lack of progress happening on several key fronts, given my low confidence that this is sufficient to induce change, as evidenced by all of the "existing criticism" that you are talking about.
Following up on my earlier comment: Because of the clear violation of the Forum's rules, we're issuing a two-week ban to Charles, starting today.
While he expressed an intent to step away from the Forum for a while, and I appreciate his walking back the original comment, we don't want a voluntary self-defined commitment to preempt an actual ban in cases like this.
I firmly believe that such "suppression," especially if done unilaterally by a single person, is exceptionally likely to be harmful. I strongly condemn such actions.
Furthermore, this is an ineffective strategy given that I can simply (and probably should) write up additional top-level posts that contain other informed views on EA.
(Warning: Long comment ahead.)
First: Thank you for posting these thoughts. I have a lot of disagreements, as I explain below, but I appreciate the time you spent to express your concerns and publish them where people could read and respond. That demonstrates courage, as well as genuine care for the EA movement and the people it wants to help. I hope my responses are helpful.
Second: I recommend this below, but Iâll also say it here: If you have questions or uncertainties about something in EA (for example, how EA funders model the potential impact of donations), try asking questions!
On the Forum is good, but you can also write directly to the people who work on projects. Theyâll often respond to you, especially if your question is specific and indicates that youâve done your own research beforehand. And even if they donât respond, your question will indicate the communityâs interest in a topic, and may be one factor that eventually leads them to write a blog/âForum post on the topic.
(For example, it may not be worth several hours of time for an EA Funds manager to write up their models for one person, but they may decide to do so after the tenth person asksâand for all you know, you could be person #10.)
Anyway, here are some thoughts on your individual points:
Itâs easy to cherry-pick from among the worldâs tens of thousands of charities and find a few that seem to have better models than GiveWellâs recommendations. The relevant questions are:
Could we have predicted Samasourceâs success ahead of time and helped it scale faster? If so, how? Overall, job/âskills-training programs havenât had much success, and since only GiveWell was doing much charity research when Samasource was young (2008), itâs understandable that theyâd focus on areas that were more promising overall.
Could someone in EA found a program as successful as Samasource? If so, how? A strategy of âtake the best thing you can find and copy itâ doesnât obviously seem stronger than âtake an area that seems promising and try to found an unusually good charity within that areaâ, which people in EA are already doing.
Also, have you heard of Wave? Itâs a for-profit startup co-founded by a member of the EA community, and it has at least a few EA-aligned staffers. They provide cheap remittances to help poor people lift their families out of poverty faster, and as far as I know, they havenât had to take any donations to do so. Thatâs the closest thing to an EA Samasource I can think of.
(If you have ideas for other self-sustaining projects you think could be very impactful, please post about them on the Forum!)
My model of early EA is that it focused on the following question:
âHow can I, as an individual, help the world as much as possible?â
But that question also had some subtext:
âł...and also, I probably want to do this somewhat reliably, without taking on too much risk.
The first people in EA were more or less alone. There werenât any grants for EA projects. There wasnât a community of thousands of people working in dozens of EA-aligned organizations. There were a few lonely individuals (and one or two groups large enough to meet up at someoneâs house and chat).
Under these circumstances, projects like âfounding the next Samasourceâ seem a lot less safe, and itâs hard to fault early adopters for choosing âsave a couple of lives every year, reliably, while holding down a steady job and building career capital for future movesâ.
(Consider that a good trader at an investment bank could become a C-level executive with tens of millions of dollars at their disposal. The odds of this donât seem much worse than the odds that a random EA-aligned nonprofit founder creates something as good as Samasourceâand they might be better.)
In general, this is a really good thing to remember when you think about the early history of the EA community: for the first few years, there really wasnât much of a âcommunityâ. Even after a few hundred people had joined up, it would have taken a lot of gumption to predict that the movement was going to be capable of changing the world in a grand-strategic sense.
There are quite a few people in EA who work full-time on donor relations and donor advisory. As a result of this work, I know of at least three billionaires who have made substantial contributions to EA projects, and there are probably more that I donât know of (not to mention many more donors at lower but still-stratospheric levels of wealth).
Also, earning to give has outcomes beyond âmoney goes to EA charitiesâ. People working at high-paid jobs in prestigious companies can get promoted to executive-level positions, influence corporate giving, influence colleagues, etc.
For example, employees of Google Boston organize a GiveWell fundraiser that brings in hundreds of thousands of dollars each year on top of their normal jobs (Iâd guess this requires a few hundred hours of work at most).
Another example: in his first week on the job, the person who co-founded EA Epic with me walked up to the CEO after her standard speech to new employees and handed her a copy of a Peter Singer book. The next Monday, he got a friendly email from the head of Epicâs corporate giving team, who told him the CEO had enjoyed the book and asked her to get in touch. While his meeting with the corporate giving head didnât lead to any concrete results, the CEO was beginning to work on her foundation this year, and itâs possible that some of her donations may eventually be EA-aligned. Things like that wonât happen unless people in EA put themselves in a position to talk to rich/âpowerful people, and not all of those people use philanthropic advisory firms.
(A lot of good can still be done through philanthropic advisory, of course; my point is that safe earning-to-give jobs still offer opportunities for high-reward risks.)
Some specific examples of high-net-worth advisory projects from people in EA:
Effective Giving (works in the UK and the Netherlands)
Harvard EAâs Philanthropy Advisory Fellowship
Good Ventures
Raising for Effective Giving
Alex Fosterâs work as a philanthropy advisor at Veddis
This isnât to say that we couldnât have had a greater focus on reaching high-net-worth advisory offices earlier on in the movement, but it didnât take EA very long to move in that direction.
(I would be curious to hear how various people involved in early EA viewed the idea of âtrying to advise rich people in a more formal wayâ.)
Itâs also worth mentioning that 80K does list philanthropic advising as one of their priority paths. My guess is that there arenât many jobs in that area, and that existing jobs may require luck/âconnections to get, but Iâd love to be proven wrong, because Iâve thought for a long time that this is a promising area. (I myself advise a small family foundation on their giving, and itâs been a rewarding experience.)
There is some EA research on the psychology of giving (the researchers I know of here are Stefan Schubert and Lucius Caviola), but this is an area I think we could scale if anyone were interested in the subjectâmaybe this is a genuine gap in EA?
Iâd be interested to see you follow up on this specific topic.
Which activities? If you point out an opportunity and make a compelling case for it, thereâs a good chance that youâll attract funding and interested people; this has happened many times already in the brief history of EA. But so far, EA projects that tried to scale quickly with help from people who werenât closely aligned generally havenât done well (as far as I know; I may be forgetting or not know of more successful projects).
This is true, but considering that the movement literally started from scratch ten years ago, and is built around some of the least marketable ideas in the world (donât yield to emotion! Give away your money! Read long articles!), it has gained strength at an incredible pace.
Some achievements:
Multiple billionaires are heavily involved.
One of the top X-risk organizations is run by a British lord who has held some of the most influential positions in his country.
GiveDirectly is working on multiple projects with the worldâs largest international aid organizations, which have the potential to sharply increase the impact of billions of dollars in spending.
There are active student effective altruism groups at more than half of the worldâs top 20 universities. Most of these groups are growing and becoming more active over time.
One of the most popular media sources for the Western liberal elite has an entire section devoted to effective altruism, whose top journalist is someone who didnât have much (any?) prior journalistic experience but did run the most popular EA Tumblr.
The former head of IARPA runs an AI risk think tank in Washington.
Ten years ago, a nascent GiveWell was finding its footing after an online scandal nearly ended the project, and Giving What We Can was about to launch with 23 members. Weâve come a long way.
Is this rate of growth sufficient? Maybe not. We may not acquire enough influence to stop the next world-rending disaster before it happens. But weâve done remarkably well despite some setbacks, and critique-in-hindsight of EA goals has a high bar to clear in order to show that things could have gone much better.
(As I noted above, though, I think youâre right that we could have paid more attention to certain ideas early on.)
The end of the last sentence has a condescending tone that slightly sours my feelings toward this piece, even though I can appreciate the point youâre trying to make.
Iâm in favor of more strategic discussion, but many of the strategy suggestions Iâve seen on the Forum suffer from at least one of the following:
A lack of specificity (a problem is noted, but no solution is proposed, or a solution is proposed with very little detail /â no modeling of any kind)
A lack of knowledge of the full scope of the present-day movement (itâs easy to reduce EA to consisting of GiveWell, Open Phil, 80K, and CEA, but thereâs a lot more going on than that; I often see people propose ideas that are already being implemented)
âSomeone should do Xâ syndrome (an idea is proposed which could go very well, but then no one ever follows up with a more detailed proposal or a grant application). In theory, EA orgs could pick up these ideas and fund people to work on them, but if your idea doesnât fit the focus of any particular organization, some individual will have to pick it up and run with it.
These suggestions are still frequently useful, and Iâve heard many of them be discussed within EA organizations, but I wish that writers would, on average, move away from abstract worries and criticism and move toward concrete suggestions and proposals.
(By the way, Iâm always happy to read anyoneâs Forum posts ahead of time and make suggestions for ways to make them more concrete, people the author might want to talk to before publishing, etc.)
Two notes:
1. GiveDirectly isnât just giving money directly to people; it is also changing the aid sector by establishing the idea that aid should clear the âcash benchmarkâ. This has already begun to influence WHO and USAID, as well as many NGOs and private foundations, and the eventual impact of that influence is really hard to calculate (not to mention the value of experimental data on basic income programs, etc.)
2. The apt comparison is not âfunding Samasource vs. funding GiveDirectlyâ. The apt comparison is âfunding the average early-stage Samasource-like thing vs. funding GiveDirectlyâ. Most of the money put into Samasource-like things probably wonât have nearly as much impact as money given directly to poor people. We might hit on some kind of fantastically successful program and get great returns, but that isnât guaranteed or even necessarily likely.
We will definitely find that some organizations have been more effective than top EA charities, but as Iâve said already, this cherry-picking wonât help us unless we learn general lessons that help us make future funding decisions. Open Phil does some of this already with their History of Philanthropy work.
Thereâs value in using Fermi estimates for potential projects, yes, but why do you think those would help us make better predictions about the world than the models used by GiveWell, Open Phil, EA Funds, etc.? Is there some factor you think these organizations routinely undervalue? Some valuable type of idea they never look at?
(Also, EA funding goes well beyond âtop charitiesâ at this point: GiveWellâs research is expanding to cover a lot more ground, and the latest grant recommendations from the Long-Term Future Fund included a lot of experimental research and ideas.)
Did you write to any funding entities before writing this post to ask about their models?
Generally, these organizations are happy to share at least the basics of their approach, and I think this post would have benefited from having concrete models to comment on (rather than guesses about how Fermi estimates and decision analysis might compare to whatever funders are doing).
No EA organization in the world will try to stop you from âstrategically thinking about career impactâ. 80Kâs process explicitly calls on individuals to consider their options carefully, with a lot of self-reflection, before making big decisions. Iâm not sure what you think is missing from the âstandardâ EA career decision process (if such a thing even exists).
Letâs say Iâm choosing between two careers. In Career A, I can save 200 lives before I retire if I manage to perform unusually well, to the point where my career is newsworthy and Iâm hailed as a moral exemplar. In Career B, I can save 200 lives before I retire if I do my job reasonably well, collect paychecks, and donate what I donât need.
The higher-EV option in this scenario is Career B, and it isnât close.
On the other hand, this next example gets closer to proving your point, which is that some careers have much higher potential impact than most ETG opportunities:
The point of that section of the introduction isn't to comment on the career choices of Petrov and Borlaug, but to emphasize that even "ordinary" people can have a tremendous impact; it's meant to be inspirational, not advisory. (Source: I recently rewrote that section of the introduction.)
Petrov's heroic actions came about as a result of a very unlikely accident and have little bearing on whether one should become a soldier. Maybe soldiering is worthwhile if you can specifically become an officer at a nuclear facility, but that seems difficult.
Borlaug's work is a bit more typical of what an impact-focused scientist can achieve, in that at least a few other scientists have also saved millions of lives.
Open Phil agrees with both of us on the potential of science; they've given tens of millions of dollars to hundreds of scientists over the last few years. Meanwhile, 80K considers certain branches of science to be priority paths, and the 2017 EA Donor Lottery winner gave most of his winnings to an organization trying to follow in Borlaug's footsteps.
I agree! This is one of the reasons I'm enthusiastic about earning-to-give: if people in EA enter a variety of influential/wealthy fields and keep their wits about them, they may notice opportunities to create change. On the other hand, studying these professions and trying to change them from the outside seems less promising.
Remember also that problems must be tractable as well as large-scale. Taking your example of "accounting", one could save Americans tens of millions of hours per year by fighting for tax simplification. But in the process, you'd need to:
Develop a strong understanding of tax law and the legislative process.
Raise millions of dollars in lobbying funds and use them effectively to grab attention from congresspeople.
Go head-to-head with Intuit and Grover Norquist, who will be spending their own millions to fight you.
I love tax simplification. It's one of my pet causes, something I'll gripe about or retweet at the slightest opportunity. But I don't think I'd be likely to have much of an impact throwing my hat into that particular ring, alongside hundreds of other people who have been arguing about it for decades. I'd rather focus on pulling the rope sideways (fighting for causes and ideas that have high potential and no major enemies).
Fantastic stuff, Aaron. Even as someone who has followed EA Forum/newsletters/blogs for 2-3 years, there were quite a few things I didn't know about. Thanks!
Along the lines of "EA lacks a system to suggest, discuss, and evaluate improvements to EA community strategy and recommendations issued to the community", this is exemplary of the lack of community-level coordination and knowledge management in EA. Efforts like the EA Wiki, which received no substantial support from the community whatsoever, have failed. The entire area has minimal interest and attention allocated to it. The EA Forum lacks some extremely obvious features: a registry of everyone in EA, for starters (edit: this was relegated to the volunteer-run and poorly resourced EA Hub for years).
My apologies for the extended delay in response! I appreciate the engagement.
Contrary to your assumption, I have a lot of information on EA, and I'm aware that the problems I'm pointing out aren't being addressed. There is likely a gap in understanding, which is common in written communication.
This communication gap would be less of a problem if the broader issue ("EA lacks a system to suggest, discuss, and evaluate improvements to EA community strategy and recommendations issued to the community") were addressed. As a specific example, where are the exact criticisms of longtermism, or of the precise strategies and tactics of specific EA organizations, laid out? There should be a publicly available argument map for this, as a rudimentary example of what such a proposed system could look like. There's a severe gap in coordination and collective intelligence software in EA.
That was just an example of a charity that would have failed to get funding, due to a combination of poor assumptions (treating a cause area/intervention type that is presumably ineffective on average as ineffective in 100% of instances), a failure to consider revenue-positive models, etc.
EA misses out on entire classes of strategies that are likely pretty good: revenue-positive models, for one, for the most part. That's not to say there isn't a single EA pursuing them, but there's a stark lack of interest and support, which is just as bad, since the community's mission is to advance high-impact efforts.
That assumes 100% of jobs/skills training programs are doomed to failure... kind of like assuming 100% of charities are doomed to be ineffective. If we used that logic, EA wouldn't exist. Doing this analysis at the level of whole causes and intervention types could be fundamentally problematic.
Yes, EAs could certainly do such a thing, which would be easier if entrepreneurship were more encouraged. With only a few career areas identified as promising at any given time (they shift over time, very annoyingly; there should be much less confidence about this), it is hard for people to even pursue this strategy, let alone get funding.
There's a lack of evaluation resources available for assessing new models that don't align with what's already been identified, which is a huge problem.
The existence of something within EA (a minority, for example) does not mean that it is adequately represented.
Using a forum is a terrible mechanism for collective intelligence and evaluation (not that this issue is unique to EA).
The method by which strategy shifts percolate is rather problematic. A few people at 80K change their minds and the entire community shifts within a few years, causing many people to lose career capital and waste effort spent specializing in certain fields. This is likely to continue in the future. Even setting aside the problems that shifts cause, the career strategies currently being advocated are likely not optimal at all and will shift again. The fix is both to reduce the cultural reliance on such top-down guidance and to completely rethink the mechanism by which career strategy changes are made.
Again, the presence of some individuals and teams working on this does not mean it's the optimal allocation.
Those direct and indirect consequences should all be factored into a quantitative model of impact for specific careers, which 80K doesn't do at all; they merely have a few bubbles for the "estimated earnings" of a job, with no public-facing holistic comparison methodology. That is surprising, given how quantitative the community is.
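As a rudimentary sketch of what such a holistic comparison could look like (the careers, factors, and weights below are all invented placeholders, not anything 80K publishes):

```python
# Rudimentary sketch of a holistic career comparison of the kind proposed.
# The careers, factor values, and scoring rule are invented placeholders.

careers = {
    # name: (direct_impact, indirect_impact, career_capital, p_success)
    "operations at an EA org": (30.0, 10.0, 5.0, 0.6),
    "earning to give": (20.0, 5.0, 8.0, 0.8),
    "policy career": (80.0, 40.0, 10.0, 0.1),
}

def holistic_score(direct, indirect, capital, p_success):
    # Weight direct and indirect consequences by the chance of success,
    # and credit career capital as option value for later moves.
    return p_success * (direct + indirect) + capital

for name, params in sorted(careers.items(),
                           key=lambda kv: -holistic_score(*kv[1])):
    print(f"{name}: {holistic_score(*params):.1f}")
```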
Yep, I mentioned influence in my post, but the question is whether this is the optimal way to do that (alongside all of the other things individual EAs could be doing).
Again, the allocation objection. The fact that these keep springing up is rather remarkable. Are we far from diminishing returns, or already past them? This should all be part of movement strategy considerations handled by a dedicated team, ideally with dedicated resources combined with collective intelligence software.
Evidently, jobs (and the organizations that provide them) can be created if the movement so chooses.
Analysis of the various gaps, and the merits of filling them, could be done (as a crude example) with some sort of voting mechanism for the various gaps, not quite a prediction market.
I do not think this is true, given observations as well as people generally leaning away from novel ideas, smaller projects, etc.
I have concerns about movement growth, but anyway, the point was what it could have been, not what it is.
A problem that continues to this day: a complete lack of investment in exploratory research (Charity Entrepreneurship has recommended this as an area). Promising new ideas and causes pop up all the time and go ignored and unfunded, due to many factors, including a lack of interest in whatever isn't being peddled by key influencers.
The fix is a centralized system storing both potential problems and solutions, in a form anyone can contribute to or vote on. Maybe in 5-10 years the EA Forum will look like this, or maybe not, given the slow (but steady) pace of useful feature development.
This itself is the problem, and years later there is still no effort to fix collective knowledge and coordination issues.
Answer: pay people; but then there is no interest in doing so. Most funding decisions are made by a very small number of biased evaluators, biased in favor of certain theories of change, causes, larger organizations, existing organizations, interventions, etc. The consequence is lots of people agreeing that something should be done, with no mechanism for that knowledge to become a funding and resourcing decision.
For all we know, revenue-generating poverty alleviation models are vastly better... there is no existing work to date exploring this idea, which is one of many thousands of strategic possibilities.
While Charity Entrepreneurship is doing this at a problem/solution level, it isn't remotely sufficient; huge assumptions are being made. Sufficient resources exist for this to be done more robustly, but there is no appetite to fund it.
No, cherry-picking wasn't the point at all. This is a way to identify strategic opportunities being missed (replicable models could succeed), to backtest evaluation criteria (e.g., was it truly impossible to identify that this would work?), etc.
Again, the fact that something is happening doesn't mean we've reached the optimal level of exploration. We're probably investing thousands of times fewer resources than we should.
No comment, but regardless, it is heavily biased by certain evaluators' opinions, with lots of groupthink.
Hearing what they say they're doing is pretty useless; it does not map to what's actually happening.
People just listen to whatever 80K, Holden, etc. tell them, rather than reaching career decisions independently. I don't care what 80K tells you to do; they tell people to do a very large number of things. We need to look at what's actually happening.
You're missing the point: you are reasoning about averages. The average officer likely doesn't save many counterfactual lives. What matters is the EV of your specific strategy, not of the overall career. With Petrov, the outcome wasn't predictable, but it could have been with that specific officer strategy. Whether the officer in question did the calculation in advance is irrelevant (he probably didn't think about it at all); the point is that it could have been foreseen.
Lots of EAs actually have these ideas, and they're not listed anywhere. Right, outside analysis should be used to foresee these opportunities, not to change fields. No effort is being spent on this kind of outside analysis for career areas considered "low impact". There is no awareness of, or attempt to evaluate, things like dental interventions that may be vastly more effective than existing interventions, for example. There are so many options, and they could all benefit from EV calculations.
It would require a big shift for EA to start doing these EV calculations for many areas. Again, a lot of work, but the people and money are here.
There needs to be this paradigm shift, followed by resource allocation, though it will probably never happen.
Tractability calculations should be done case by case, not generalized. And that's just one of many possibilities within accounting, none of which have any awareness or EV calculations behind them. But yeah, that one might be hard.
There are many probabilities between 0 and 1.
I strong-downvoted all of this person's recent comments (and strong-downvoted my own comment that you are reading) because this person's content is toxic, borderline abusive, and I think downvoting suppresses the content.
This is the worst content I have ever seen on the forum. What makes it bad is how it steals attention by raiding EA norms and existing criticism, smearing this all into a long slur made meaningless with an endless stream of discursion/pivots/padding.
I literally just don't want Aaron G. or someone else to spend time on this (Aaron's comment was lavish and I think other eyeballs and thoughts will spend time on this).
Separately and additionally, I can justify the downvoting by terrible content/lack of modesty/etc.
I am not "friends with Aaron" or anything like that. I'm not virtuous or anything like that.
I've upvoted most of Holocron's comments on this thread. They're not perfect, but I think they deserve a better response than this. (My votes were based on comment quality, not just because I wanted to reverse downvotes.)
Mass-downvoting everything a person writes is against the Forum's rules. If you disagree this vehemently with someone, explain why (in more detail than "this person's content is toxic"; I've read all of their comments and I have no idea what you mean).
I'm discussing this comment with our moderation team to see if we want to take further action.
My behavior above was a major mistake.
Sometimes, when someone believes they perceive ill intent of a certain character, they will react very strongly; if they are wrong, that reaction can be very inappropriate.
I won't be on the forum for a while.
Thank you for your generous efforts.
Thanks Aaron, I'm happy to see this is an actively enforced community norm. I must admit that, not knowing it was a norm, I did downvote comments in response, and then subsequently proposed a monitoring system (or even a system that warns against this as it's being done). I will undo the downvotes (or feel free to undo them on my behalf).
If you have feedback on how to make my comments better, please let me know!
I'll leave your votes to you; we don't actively remove votes unless the circumstances are extreme (e.g. removing all votes from a sockpuppet account someone used to upvote their own content).
Hi Charles,
I am very sorry you feel that way. I hope that by starting off my comment with "My apologies for the extended delay in response! I appreciate the engagement", I'm not indicating that I'm "toxic" and "borderline abusive."
It's very concerning to see continual inaction on these matters, since I care about the future of this movement and the world; so, when bringing up long-unaddressed problems (which you seem to implicitly recognize as somewhat valid), I don't think it's unreasonable to take a more critical tone. I'm fairly confident I can find a lot of content written in a much more severe manner on this forum, and most certainly on LessWrong.
How exactly am I "raiding EA norms"? In my communication style and terminology? That doesn't seem like a problem to me, even if it were the case.
You wish for people to ignore legitimate feedback, and to suppress it with downvoting? That doesn't sound like a way for a movement to improve. I do appreciate Aaron's engagement; while I think he may have misunderstood certain points I was trying to make, his information was legitimately helpful, as indicated by the other commenter on this thread.
While this may not get much engagement, there are, as you seem to recognize, very legitimate issues here, and a complete lack of action on multiple fronts. My views most certainly mirror those of others in the community; recent EA Forum posts even allude to similar ideas.
I think that pointing out legitimate areas of potential improvement should be valued in communities, and it should be acceptable to take a somewhat critical tone as long as the intent is not to cause emotional harm.
Unfortunately, I don't have unlimited time every day to refine my tone and write detailed writeups on the lack of progress on several key fronts, given my low confidence that doing so is sufficient to induce change, as evidenced by all of the "existing criticism" you mention.
Following up on my earlier comment: because of the clear violation of the Forum's rules, we're issuing a two-week ban to Charles, starting today.
While he expressed an intent to step away from the Forum for a while, and I appreciate his walking back the original comment, we don't want a voluntary self-defined commitment to preempt an actual ban in cases like this.
I firmly believe that such "suppression," especially if done unilaterally by a single person, is exceptionally likely to be harmful. I strongly condemn such actions.
Furthermore, this is an ineffective strategy given that I can simply (and probably should) write up additional top-level posts that contain other informed views on EA.