I’m currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
Ozzie Gooen
Quick thoughts:
I appreciate the write-up and transparency.
I’m a big fan of engineering work. At the same time, I realize it’s expensive, and it seems like we don’t have much money to work with these days. I think this makes it tricky to find situations where it’s clearly a good fit with the existing donors.
Bigger-picture, I imagine many readers here would have little idea of what “new engineering work” would really look like. It’s tough to do a lot with a tiny team, as you point out. I could imagine some features helping the forum, but would also expect many changes to be experimental.
“Everyone going to the Reddit thread, at once” seems doomed to me, as you point out. But I’d feel better about gradual things. Maybe we could have someone try moderating Reddit for a few months, and see if we can make it any better first. “Transitioning the EA Forum” could come very late, only if we’re able to show good success on a smaller scale.
That said, I’m skeptical of Reddit as a primary forum. I don’t know of other smart academic-aligned groups that have really made it their official infrastructure. It seems to me like subreddits are often branches of the overall Reddit community, which is quite separate from the EA community, so it would be difficult to find the slice that we want. I feel better about other paid forum providers, if we go the route of shutting down the EA Forum.
I think that the EA Discords/Slacks could use more support. Perhaps we shouldn’t try to have “One True Platform”, but have a variety of platforms that work with different sets of people.
As I think about it, it seems quite possible that many of the obvious technical improvements for the EA Forum, at this point, won’t translate nicely into user growth. It’s just very hard to make user growth happen, especially after a few years of tech improvements.
I think the EA Forum has major problems with scaling, and that this is a hard tech problem. It’s hard to cleanly split the community into sub-communities (I know there have been some attempts here). So right now I think we have the issue that we can only have one internet community (to some extent), and this scares a bunch of people away.
Personally, what feels most missing to me around EA online is leadership/communication about the big issues, some smart+effective moderation (this is really tough), and experimentation on online infrastructure outside the EA Forum (see Discords, online courses, online meetups, maybe new online platforms, etc). I think there’s a lot of work to do here, but would flag that it’s likely pretty hit-or-miss, maybe making it a more difficult ask for funders.
Anyway, this was just my quick take. Your team obviously has a lot more context.
I’m overall appreciative of the team and of the funders who have supported the team this long.
I went back-and-forth on this topic with Claude. I was hoping that it would re-derive my points, but getting it to provide decent criticism took a bit more time than I was expecting.
That said, I think with a few prompts (like asking it what it thought of those specific points), it was able to be useful.
https://claude.ai/share/00cbbfad-6d97-4ad8-9831-5af231d36912
Happy to see genuine attempts at this area.
> We’re seeking feedback on our cost-effectiveness model and scaling plan
The cost-effectiveness you mentioned is incredibly strong, which made me suspicious. “$5 per income doubling” is a remarkably strong claim.
I’ve worked in software for most of my professional life. Going through this more, I’m fairly skeptical of the inputs to your model. Good web applications are a ton of work, even if they’re reusing AI in some way. I have a hard time picturing how much you could really get for $2M, in most settings. (Unless perhaps the founding team is working at a huge pay cut or something, but then this would change the effective cost-effectiveness.)
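As a rough sanity check on what those two numbers imply together (treating the ~$2M build cost and the quoted $5 per income doubling as given, and ignoring other expenses): $2,000,000 / $5 per doubling ≈ 400,000 income doublings. That is an enormous amount of impact to expect from an early-stage app, which is part of why the headline figure makes me cautious.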
I don’t see much discussion of marketing/distribution expenses. I’d expect these to be high.
The AI space is rapidly changing. This model takes advantage of recent developments, but doesn’t seem to assume there will be huge changes in the future. If there are, the math changes a lot. I’d expect a mix of [better competition], [the tool quickly becoming obsolete], and [the employment landscape changes so much that the income doubling becomes suspicious].
You mention the results of academic studies, but my impression is that you don’t yet have scientific results from people using your specific app. I’d be very skeptical of how much you can generalize from those studies. I’d naively expect it would be difficult to motivate users to actually spend much time on the app.
In the startup world, business models in the very early stages of development are treated with tremendous suspicion. I think we have incredibly large uncertainty bounds (with lots more probability of failure), until we see some more serious use.
Overall, this write-up reminds me of a lot of what I hear from early entrepreneurs. I like the enthusiasm, but think that it’s a fair bit overoptimistic.
All that said, it’s very possible this is still a good opportunity. Often in the early stages, people would expect a lot of experimentation and change to the specific product.
This seems great to me, kudos for organizing. I’m sure a bunch of people will be interested to see the outcome of this.
If it’s successful, I imagine it might be able to be scaled.
Similar to “Greenwashing” and “Safetywashing”, I’ve been thinking about “Intellectual Washing.”
The pattern works like this: “Find someone who seems like an intellectual who somewhat aligns with your position. Then claim you have strong intellectual (and by extension, logical) support for your views.”
This is easiest to see in sides that you disagree with.
For example, MAGA gets intellectual cred from “The dark enlightenment” / Curtis Yarvin / Peter Thiel / etc. But I’m sure Trump never listened to any of these people, and was likely barely influenced by them. [1]
Hitler famously claimed alignment with Nietzsche, and had support from Heidegger. Note that Nietzsche didn’t agree with this. And I’d expect Hitler engaged very little with Heidegger’s ideas.
There’s a structural risk for intellectuals: their work can be appropriated not as a nuanced set of ideas to be understood, but as legitimizing tokens for powerful interests.
The dynamics that enable this include:
- The difficulty of making a living or gaining attention as a serious thinker
- Public resource/interest constraints around complex topics
- The ready opportunity to be used as a simple token of support for pre-existing agendas
Note: There’s a long list of types of “X-washing.” There’s an interesting discussion to be had about the best terminology for this area, but I suspect most readers won’t find it particularly interesting. One related concept is “selling out,” as when an artist with street cred pairs up with a large brand/label or similar.
[1] While JD Vance might represent some genuine intellectual influence, and Thiel may have achieved specific narrow technical implementations, these appear relatively minor in the broader context of policy influence.
I assumed it’s been mostly dead for a while (haven’t heard about it for a few months). I’m very supportive of it, would like to see it (and more) do well.
Yea, this seems like a remarkably basic defense for the title “People Barely Care About Relative Income”. I want to expect more from economists like this.
I tried asking Claude to come up with a list of arguments on both sides of this. Then I asked it to come up with its final take. I thought that this kind of analysis was far more reasonable than what Caplan did.
(Obviously, this was a very basic job. A more thorough one would probably look like asking an LLM to do some amount of background research and a large amount of brainstorming and then summarize that.)
https://claude.ai/share/5e2f1332-095e-4858-b960-55fc566b61ee
There’s some relevant discussion here:
https://forum.effectivealtruism.org/posts/TG2zCDCozMcDLgoJ5/metaculus-q4-ai-benchmarking-bots-are-closing-the-gap?commentId=TvwwuKB6rNASzMNoo
Basically, it seems like people haven’t outperformed the Metaculus template bot much, which IMO is fairly underwhelming, but it is what it is.
You can use simple tricks, though, like running it a few times and averaging the results.
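To illustrate, here’s a minimal, hypothetical sketch (`single_forecast` stands in for whatever call your bot makes to the underlying model; here it just simulates a noisy output):

```python
import random
import statistics

def single_forecast(question: str) -> float:
    """Stand-in for one run of an LLM forecaster. For illustration only:
    the question is ignored and we simulate a noisy probability estimate."""
    return min(max(random.gauss(0.6, 0.05), 0.0), 1.0)

def averaged_forecast(question: str, n_runs: int = 5) -> float:
    """Run the forecaster several times and average the results,
    which smooths out run-to-run noise."""
    return statistics.mean(single_forecast(question) for _ in range(n_runs))

print(averaged_forecast("Will X happen by the end of 2026?"))
```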
I’m sure some people are using custom AI tools for polymarket, but I don’t expect that to be very public.
I was focusing on Metaculus/Manifold, where I don’t think there’s much AI bot engagement yet. (Metaculus does have a dedicated tournament, but that’s separate from the main part we see, I believe).
There’s been some neat work on making AI agent forecasters. Some of these seem to have pretty decent levels of accuracy, vs. certain sets of humans.
And yet, very little of this seems to be used in the wild, from what I can tell.
It’s one thing to show some promising results in a limited study. But ultimately, we want these tools to be used by real people.
I assume some obvious todos would be:
1. Websites where you can easily ask one or multiple AI forecasters questions.
2. Competing services that package “AI forecasting” tools in different ways, focusing on optimizing (positive) engagement.
3. I assume that many AI forecasters should really be racking up good scores in Metaculus/Manifold now. The limitation seems to mainly be effort—neither platform has significant incentives yet.
Optimizing AI forecasting bots, but only in experimental settings, seems akin to optimizing cameras, but only in experimental settings. I’d expect you’d wind up with things that are technically impressive but highly unusable. We might learn a lot about a few technical challenges, but little about what real use would look like or what the key bottlenecks will be.
Thanks for clarifying your take!
I’m sorry to hear about those experiences.
Most of the problems you mention seem to be about the specific current EA community, as opposed to the main values of “doing a lot of good” and “being smart about doing so.”
Personally, I’m excited for certain altruistic and smart people to leave the EA community, as it suits them, and do good work elsewhere. I’m sure that being part of the community is limiting to certain people, especially if they can find other great communities.
That said, I of course hope you can find ways for the key values of “doing good in the world” and similar to work for you.
I feel like EAs might be sleeping a bit on digital meetups/conferences.
My impression is that many people prefer in-person events to online ones. But at the same time, a lot of people hate needing to be in the Bay Area / London or having to travel to events.
There was one EAG online during the pandemic (I believe the others were EAGxs), and I had a pretty good experience there. Some downsides, but some strong upsides. It seemed very promising to me.
I’m particularly excited about VR. I have a Quest 3, and have been impressed by the experience of chatting with people in VRChat. The main downside is that there aren’t any professional events in VR that would interest me. Quest 3s are expensive ($500), but far cheaper than housing and office space in Berkeley or London.
I’d also flag:
1. I think that video calls can be dramatically improved with better microphone and camera setups. These can cost $200 to $2k or so, but make a major difference.
2. I’ve been doing some digging into platforms similar to GatherTown. I found GatherTown fairly ugly, off-putting, and limited. SpatialChat seems promising, though it’s more expensive. Zoom seems to be experimenting in the space with products like Zoom Huddles (for coworking in small groups), but these are new.
3. I like Focusmate, but think we could have better spaces for EAs/community members.
4. I think that people above the age of 25 or so find VR weird for what I’d describe as mostly status quo bias. Younger people seem to be far more willing and excited to hang out in VR.
5. I obviously think this is a larger business question. It seems like there was a wave of enthusiasm for remote work during COVID, and this has mostly dried up. However, there are still a ton of remote workers. My guess is that businesses are making a major mistake by not investing enough in better remote software and setups.
6. Organizing community is hard, even if it’s online. I’d like to see more attempts to pay people to organize online coworking spaces and meetups.
7. I think that online events/conferences have become associated with the most junior talent. This seems like a pity to me.
8. I expect that different online events should come with different communities and different restrictions. A lot of existing online events/conferences are open to everyone, but then this means that they will be optimized for the most junior people. I think that we want a mix here.
9. Personally, I abhor the idea that I need to couple the place where I physically live with the friends and colleagues I have. I’d very much prefer optimizing for these two separately.
10. I think our community would generally be better off if remote work were easier to do. I’d expect this would help on multiple fronts—better talent, happier talent, lower expenses, more resilience from national politics, etc. This is extra relevant given the current US political climate—this makes it tougher to recommend that others immigrate to the US or even visit (and the situation might get worse).
11. I’d definitely admit that remote work has a lot of downsides right now, especially with the current tech. So I’m not recommending that all orgs go remote. Just that we work on improving our remote/online infrastructure.
I want to agree, but “best people who ever lived” is a ridiculously high bar! I’d imagine that both of them would be hesitant to claim anything quite that high.
Happy to see this, but of course worried about the growth of insect farming. I didn’t realize it was so likely to grow.
One small point: I like that this essay goes into detail on a probabilistic estimate. I’d find it really useful if there were some other “sanity checks” from other parties to go along with that. For example, questions on Manifold or Metaculus, or even using AI forecasters/estimators to give their takes.
This strikes me as a pretty good forecasting question (very precise, not too far away). I imagine it would be easy to throw it on Manifold and spend $20 or so subsidizing it.
Yea, I broadly agree with Mjreard here.
The BlueDot example seems different to what I was pointing at.
I would flag that the lack of EA funding power sometimes makes xrisk less of a focus for organizations.
Like, some groups might not trust that OP/SFF will continue to support them, and then do whatever they think they need to in order to attract other money—and this often is at odds with xrisk prioritization.
(I clearly see this as an issue with the broader world, not with OP/SFF.)
Quickly:
1. I agree that this is tricky! I think it can be quite tough to be critical, but as you point out, it can also be quite tough to be positive.
2. One challenge with being positive to those in power is that people can have a hard time believing it. Like, you might just be wanting to be liked. Of course, I assume most people would still recommend being honest; it’s just that it can be hard for others to know how much to trust it. Also, the situation obviously changes when you’re complimenting people without power (e.g., emerging/local leaders).
I see it a bit differently.
> For example, it doesn’t seem like your project is at serious risk of defunding if you’re 20-30% more explicit about the risks you care about or what personally motivates you to do this work.
I suspect that most nonprofit leaders feel a great deal of funding insecurity. There are always neat new initiatives that a group would love to expand into, and also, managers hate the risk of potentially needing to fire employees. They’re often thinking about funding on the margins—either they are nervous about firing a few employees, or they are hoping to expand to new areas.
> There are probably only about 200 people on Earth with the context x competence for OP to enthusiastically fund for leading on this work
I think there’s more competition. OP covers a lot of ground. I could easily see them just allocating a bit more money to human welfare later on, for example.
> My wish here is that specific people running orgs and projects were made of tougher stuff re following funding incentives.
I think that the issue of incentives runs deeper than this. It’s not just a matter of leaders straightforwardly understanding the incentives and acting accordingly. It’s also that people will start believing things that are convenient given those incentives, that leaders will be chosen who seem to be good fits for the funding situation, and so on. The people who really believe in other goals often get frustrated and leave.
I’d guess that the leaders of these orgs feel more aligned with the OP agenda than with the agenda you outline, for instance.
Happy to see development and funding in this field.
I would flag the obvious issue that a very small proportion of wild animals live in cities, given that cities take up a small proportion of the world. But I do know that there have been investigations into rats, which do exist in great numbers in cities.
The website for this project shows a fox—but I presume that this was chosen because it’s a sympathetic animal—not because foxes in cities represent a great deal of suffering.
I understand that tradeoffs need to be made to work with different funding sources and circumstances. But I’m of course curious what the broader story is here.
I definitely sympathize, though I’d phrase things differently.
As I’ve noted before, I think much of the cause is just that the community incentives very much come from the funding. And right now, we only have a few funders, and those funders are much more focused on AI Safety specifics than they are on things like rationality/epistemics/morality. I think these people are generally convinced on specific AI Safety topics and unconvinced by a lot of more exploratory / foundational work.
For example, this is fairly clear at OP. Their team focused on “EA” is formally called “GCR Capacity Building.” The obvious goal is to “get people into GCR jobs.”
You mention a frustration about 80k. But 80k is getting a huge amount of their funding from OP, so it makes sense to me that they’re doing the sorts of things that OP would like.
Personally, I’d like to see more donations come from community members, to be aimed at community things. I feel that the EA scene has really failed here, but I’m hopeful there could be changes.
I don’t mean to bash OP / SFF / others. I think they’re doing reasonable things given their worldviews, and overall I think they’re both very positive. I’m just pointing out that they represent about all the main funding we have, and that they just aren’t focused on the EA things some community members care about.
Right now, I think that EA is in a very weak position. There just aren’t that many people willing to put in time or money to push forward the key EA programs and mission, other than using it as a way to get somewhat narrow GCR goals.
Or, in your terms, I think that almost no one is actually funding the “Soul” of EA, including the proverbial EA community.
I was thinking of Disagreeing.
On one hand, I’m very supportive of more people doing open-source development on things like this.
On the other, I think some people might think, “It’s open-source, and our community has tech people around. Therefore, people could probably do the maintenance work for free.”
From experience, it’s incredibly difficult to actually get useful open-source contributors, especially for long-term maintenance of apps that aren’t extraordinarily interesting and popular. So it can be a nice thing to encourage, but it should only be a tiny part of big-picture strategic planning.