I’m currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
Ozzie Gooen
It sounds like we largely agree on a lot of this.
Small point, but I feel pretty uncomfortable with this statement, which I’ve heard a bunch of times before (in various forms):

> Billionaires don’t hoard wealth, but they invest it in companies and lend it to governments.
I very much know that billionaires don’t keep their wealth as literal cash.
But I feel like it’s generally fairly clear that:
1. These billionaires are doing this just because it’s in their best interest. Investing and lending is not done altruistically; it’s done for profit. If these actions wound up producing zero or partially-negative global impact, I suspect billionaires would still take them. (Relatedly, I expect that at least several of the companies they invest in are net-negative.)
2. It would be dramatically better if these people would donate this money to decent causes instead.
I wouldn’t value “$1 that a billionaire is investing in the stock market” as literally having zero value, but I think most of us would place it quite low.

I assume that, as with many annoying debates, this is an issue of semantics. What does “hoarding” mean, or how would it be interpreted? I think it’s fair to classify a situation such as “I’m going to keep a bunch of money for myself and try to do the conventional thing of maximizing it, which now means putting it in the stock market” as “hoarding,” but I realize others might prefer other terminology.
I decided to do a “deep search” with Perplexity. It largely concludes that charitable workers come from privilege. It does seem to focus on charities that pay less than EA ones do, but I’d flag that even EA positions very often pay significantly less than similar corporate ones.
https://www.perplexity.ai/search/do-a-bunch-of-research-on-the-XUyjcqCdQGKHoyl1LCrgZA

> “The evidence strongly suggests that certain clusters within the nonprofit sector—particularly executive leadership, board positions, fundraising roles, and positions in smaller or resource-constrained organizations—disproportionately draw individuals from backgrounds of wealth and privilege. This pattern stems from structural factors including compensation practices, opportunity cost considerations, networking requirements, and the relationship between financial independence and leadership autonomy.”
Surprising compared to what reference class?
I’m not just talking about the most famous people, like Peter Singer or William MacAskill. I have a lot of more regular people in mind, like various employees at normal EA organizations that I don’t suspect many people here would individually know the names of.
I’m curious how much this would match other prestigious nonprofits. My quick guess is that EAs probably have more academic-leaning parents than employees of the majority of other nonprofits. I’m sure that there are some higher-status nonprofits/organizations, like the UN, whose employees come from greater wealth than the EA average.

Do you expect that the median employee at The Salvation Army comes from a wealthy family?
My impression is that the Salvation Army is quite huge, and probably not particularly high-status among privileged crowds.
> I’m curious why is it important for people in your EA community to assess how impressive someone is?
I’m thinking of situations where someone thinks to themselves, “Person X has achieved a lot more than I have, they must just be more motivated/intelligent.”
I think anyone anywhere has reached their position through a combination of merit, family/social networks, and fortunate life circumstances.
I agree. I’m not sure how unusual EA is here. My main point is that it happens here—not that it doesn’t happen in other similar areas.
~~It seems like recently (say, the last 20 years) inequality has been rising.~~ (Edited, from comments.)

Right now, the top 0.1% of wealthy people in the world are holding on to a very large amount of capital.
(I think this is connected to the fact that certain kinds of inequality have increased in the last several years, but I realize now my specific crossed-out sentence above led to a specific argument about inequality measures that I don’t think is very relevant to what I’m interested in here.)
On the whole, it seems like the wealthy donate incredibly little (a median of less than 10% of their wealth), and recently they’ve been good at keeping their money from getting taxed.
I don’t think that people are getting less moral, but I think it should be appreciated just how much power and wealth is in the hands of the ultra-wealthy now, and how little of value they are doing with it.
Every so often I discuss this issue on Facebook or other places, and I’m often surprised by how much sympathy people in my network have for these billionaires (not the most altruistic few, but these people on the whole). I suspect that a lot of this comes partially from [experience responding to many mediocre claims from the far-left] and [living in an ecosystem where the wealthy class is able to subtly use their power to gain status from the intellectual class].

The top 10 known billionaires have easily $1T now. I’d guess that all EA-related donations in the last 10 years have been less than around $10B. (GiveWell says they have helped move $2.4B.) 10 years ago, I assumed that as word got out about effective giving, many more rich people would start doing that. At this point it’s looking less optimistic. I think the world has quite a bit more wealth, more key problems, and more understanding of how to deal with them than it ever had before, but still this hasn’t been enough to make much of a dent in effective donation spending.
At the same time, I think it would be a mistake to assume this area is intractable. While it might not have improved much, in fairness, I think there has been little dedicated and smart effort to improve it. I am very familiar with programs like The Giving Pledge and Founders Pledge. While these are positive, I suspect they absorb limited total funding (under $30M/yr, for instance). They also follow one particular, highly cooperative strategy. I think most people working in this area are in positions where they need to be highly sympathetic to a lot of these people, which means I think there’s a gap when it comes to more cynical or confrontational thinking.
I’d be curious to see the exploration of a wide variety of ideas here.
In theory, if we could move these people from donating, say, 3% of their wealth to, say, 20%, I suspect that could unlock enormous global wins, dramatically more than anything EA has achieved so far. It doesn’t even have to go to particularly effective places: even ineffective efforts could add up, if enough money is thrown at them.
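As a rough illustration, using the $1T figure for the top 10 billionaires mentioned above: moving from 3% to 20% would mean going from roughly $30B to roughly $200B in donations, an order of magnitude more than my guess of under $10B in total EA-related donations over the last decade.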
Of course, this would have to be done gracefully. It’s easy to imagine a situation where the ultra-wealthy freak out and attack all of EA or similar. I see work to curtail factory farming as very analogous, and expect that a lot of EA work on that issue has broadly taken a sensible approach here.
From The Economist, on “The return of inheritocracy”
> People in advanced economies stand to inherit around $6trn this year—about 10% of GDP, up from around 5% on average in a selection of rich countries during the middle of the 20th century. As a share of output, annual inheritance flows have doubled in France since the 1960s, and nearly trebled in Germany since the 1970s. Whether a young person can afford to buy a house and live in relative comfort is determined by inherited wealth nearly as much as it is by their own success at work. This shift has alarming economic and social consequences, because it imperils not just the meritocratic ideal, but capitalism itself.
> More wealth means more inheritance for baby-boomers to pass on. And because wealth is far more unequally distributed than income, a new inheritocracy is being born.
Also, quickly:
I don’t blame these people for having good circumstances. Of all the things one could do with privilege/gifts, I think that [working in effective areas] is about as good as it gets.
These people will generally not talk loudly about their backgrounds. It’s an awkward topic to bring up, and raising it could easily do more harm than good.
I myself have had a bunch of advantages I’m grateful for; I’m very arguably in this crowd.
There’s clearly a challenging question of how status should be thought about in such a community. It seems very normal for communities to develop some sort of social hierarchy, typically for reasons that are mostly unfair (and it’s not clear what a fair hierarchy even means). I think the easy thing to argue is that people should generally be slow to either idolize individuals who seem to do well, or to think poorly of people who seem to be ineffective/bad.
A surprising number of EA researchers I know have highly accomplished parents. Many have family backgrounds (or have married into families) that are relatively affluent and scientific.
I believe the nonprofit world attracts people with financial security. While compensation is often modest, the work can offer significant prestige and personal fulfillment.
This probably comes with a bunch of implications.
But the most obvious implication to me, for people in this community, is to realize that it’s very difficult to assess how impressive specific individual EAs/nonprofit people are without understanding their full personal situations. Many prominent community members have reached their positions through a combination of merit, family/social networks, and fortunate life circumstances.
One specific related take I’d note: I’ve noticed that the top individual donors (i.e. Jaan Tallinn, Dustin Moskovitz, and others, not OP) get a great amount of respect and deference.
I think that these people are creating huge amounts of value.
However, I think a lot of people assume that the people contributing the most to EA (these donors) are the “most intense and committed EAs”, and I think that’s not the case. My impression is that these donors, while very smart and hardworking, are quite distinct from most “intense and committed EAs.” They often have values and beliefs that are fairly different. I suspect that they donate to EA causes not because they are incredibly closely aligned, but because EA causes/organizations represent some of the closest options available of existing charity options.
Again, I think that their actions are overall quite good and that these people are doing great work.
But at the same time, when I look at the people I find most inspiring, or in whom I’d personally place the greatest trust if I had a great deal of money, I’d probably look more to others I see at the extreme of hardworking, intelligent, and reasonable, who are often researchers with dramatically lower net worths.
Correspondingly, one of the things I admire most about many top donors is their ability to defer to others who are better positioned to make specific choices. Normally, “Finding the best people for the job, accepting it’s not you, and then mostly getting out of the way” is about the best you can do as an individual donor.
I broadly agree with this!
At the same time, I’d flag that I’m not quite sure how to frame this.

If I were a donor to 80k, I’d see this action less as “80k did something nice for the EA community that they themselves didn’t benefit from” and more as “80k did something that was a good bet in terms of expected value.” In some ways, this latter framing can be viewed as more noble, even though it might be seen as less warm.
Basically, I think that traditional understandings of “being thankful” sort of break down when organizations are making intentional investments that optimize for expected value.
I’m not at all saying that this means that these posts are less valuable or noble or whatever. Just that I’d hope we could argue that they make sense strictly through the lens of EV optimization, and thus don’t need to rely as much on the language of appreciation.
(I’ve been thinking about this with other discussions)
I generally believe that EA is effective at being pragmatic, and in that regard, I think it’s important for the key organizations that are both giving and receiving funding in this area to coordinate, especially with topics like funding diversification. I agree that this is not the ideal world, but this goes back to the main topic.
For reference, I agree it’s important for these people to be meeting with each other. I wasn’t disagreeing with that.
However, I would hope that over time, more people who aren’t in the immediate OP umbrella would be brought into key discussions of the future of EA. At least have like 10% of the audience be strongly/mostly independent or something.
The o1-preview and Claude 3.5-powered template bots did pretty well relative to the rest of the bots.
As I think about it, this surprises me a bit. Did participants have access to these early on?
If so, it seems like many participants underperformed the examples/defaults? That seems kind of underwhelming. I guess it’s easy to make a lot of changes that seem good at the time but wind up hurting performance when tested. Of course, this raises the concern that there wasn’t any faster/cheaper way of testing these bots first. Something seems a bit off here.
I think you raise some good points on why diversification as I discuss it is difficult and why it hasn’t been done more.
Quickly:
> I agree with the approach’s direction, but this premise doesn’t seem very helpful in shaping the debate.
Sorry, I don’t understand this. What is “the debate” that you are referring to?
> At the last MCF, funding diversification and the EA brand were the two main topics
This is good to know. Since we’re mentioning MCF, I would bring up that it seems bad to me that MCF is very much within the OP umbrella, as I understand it. I believe that it was funded by OP or CEA, and the people who set it up were employed by CEA, which was primarily funded by OP. Most of the attendees seem to be people at OP or CEA, or else heavily funded by OP.
I have a lot of respect for many of these people and am not claiming anything nefarious. But I do think that this acts as a good example of the sort of thing that seems important for the EA community, and also that OP has an incredibly large amount of control over. It seems like an obvious potential conflict of interest.
Agreed that this would be good. But it can be annoying to do without additional tooling.
I’d like to see tools that try to ask a question from a few different angles / perspectives / motivations and compare results, but this would be some work.
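As a very rough sketch of what I mean, here’s one way it could look with the Anthropic Python SDK. The framings, model name, and function are just my own placeholder assumptions, not a tested tool:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Placeholder framings; a real tool would tailor these to the question at hand.
FRAMINGS = [
    "Answer as a skeptic actively looking for flaws in the claim.",
    "Answer as a sympathetic advocate trying to steelman the claim.",
    "Answer as a neutral analyst weighing costs and benefits, with rough numbers.",
]


def ask_from_angles(question: str, model: str = "claude-3-5-sonnet-latest"):
    """Ask the same question under several framings and collect the answers for comparison."""
    answers = []
    for framing in FRAMINGS:
        response = client.messages.create(
            model=model,
            max_tokens=600,
            system=framing,  # each framing is passed as the system prompt
            messages=[{"role": "user", "content": question}],
        )
        answers.append((framing, response.content[0].text))
    return answers


if __name__ == "__main__":
    for framing, answer in ask_from_angles("How inflammatory is the comment pasted below? ..."):
        print(f"--- {framing}\n{answer}\n")
```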
Quickly:
1. Some of this gets into semantics. There are some things that are more “key inspirations for what was formally called EA” and other things that “were formally called EA, or called themselves EA.” GiveWell was highly influential around EA, but I think it was created before the term EA was coined, and I don’t think they publicly associated as “EA” for some time (if ever).
2. I think we’re straying from the main topic at this point. One issue is that while I think we disagree on some of the details/semantics of early EA, I also don’t think that matters much for the greater issue at hand. “The specific reason why the EA community technically started” is pretty different from “what people in this scene currently care about.”
That’s useful, thanks!
When having conversations with people that are hard to reach, it’s easy for discussions to take ages.
One thing I’ve tried is having a brief back-and-forth with Claude, asking it to provide all the key arguments against my position. Then I make the conversation public, send a link to the chat, and ask the other person to look at it. I find that this can get through a lot of the beginning points on complex topics, with minimal human involvement.
I often second-guess my EA Forum comments with Claude, especially when someone mentions a disagreement that doesn’t make sense to me.
When doing this I try to ask it to be honest / not be sycophantic, but this only helps so much, so I’m curious about better prompts to prevent sycophancy.
I imagine at some point all my content could go through an [can I convince an LLM that this is reasonable and not inflammatory] filter. But a lower bar is just doing this for specific comments that are particularly contentious or argumentative.
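As a minimal sketch of what such a filter might look like (again with the Anthropic SDK; the prompt wording, model name, and function name are placeholder assumptions, not something I’ve tested):

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

CHECK_PROMPT = (
    "You are reviewing a draft forum comment before it is posted. "
    "Reply with exactly one word, PASS or FLAG: PASS if the comment seems "
    "reasonable and not inflammatory, FLAG otherwise."
)


def passes_filter(draft: str, model: str = "claude-3-5-sonnet-latest") -> bool:
    """Return True if the model judges the draft reasonable and not inflammatory."""
    response = client.messages.create(
        model=model,
        max_tokens=5,
        system=CHECK_PROMPT,
        messages=[{"role": "user", "content": draft}],
    )
    return response.content[0].text.strip().upper().startswith("PASS")
```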
This is pretty basic, but seems effective.
In the Claude settings you can provide a system prompt. Here’s a slightly-edited version of the one I use. While short, I’ve found that it generally seems to improve conversations for me. Specifically, I like that Claude seems very eager to try estimating things numerically. One weird but minor downside, though, is that it will sometimes randomly bring up items from it in conversation, like, “I suggest writing that down, using your Glove80 keyboard.”
> I’m a 34yr old male, into effective altruism, rationality, transhumanism, uncertainty quantification, Monte Carlo analysis, TTRPGs, cost-benefit analysis. I blog a lot on Facebook and the EA Forum.
> Ozzie Gooen, executive director of the Quantified Uncertainty Research Institute.
> 163lb, 5′10″, generally healthy, have RSI issues.
> Work remotely, often at cafes and the FAR Labs office space.
> I very much appreciate it when you can answer questions by providing cost-benefit analyses and other numeric estimates. Use probability ranges where appropriate.
> Equipment includes: Macbook, iPhone 14, AirPods Pro 2nd gen, Apple Studio Display, an extra small monitor, some light gym equipment, Quest 3, Theragun, AirTags, Glove80 keyboard using Colemak DH, ergo mouse, Magic Trackpad, Connect EX-5 bike, inexpensive rowing machine.
> Heavy user of VS Code, Firefox, Zoom, Discord, Slack, YouTube, YouTube Music, Bear (notetaking), Cursor, Athlytic, Bevel.
Favorite Recent LLM Prompts & Tips?
I think you bring up a bunch of good points. I’d hope that any concrete steps on this would take these sorts of considerations into account.
> The concerns implied by that statement aren’t really fixable by the community funding discrete programs, or even by shelving discrete programs altogether. Not being the flagship EA organization’s predominant donor may not be sufficient for getting reputational distance from that sort of thing, but it’s probably a necessary condition.
I wasn’t claiming that this funding change would fix all of OP/GV’s concerns. I assume that would take a great deal of work, among many different projects/initiatives.
One thing I care about is that someone is paid to start thinking about this critically and extensively, and I imagine they’d be more effective if not under the OP umbrella. So one of the early steps to take is just trying to find a system that could help figure out future steps.
> I speculate that other concerns may be about the way certain core programs are run—e.g., I would not be too surprised to hear that OP/GV would rather not have particular controversial content allowed on the Forum, or have advocates for certain political positions admitted to EAGs, or whatever.
I think this raises an important and somewhat awkward point: greater separation between EA and OP/GV would make it harder for OP/GV to have as much control over these areas, and there would be times when they wouldn’t be as happy with the results.
Of course:
1. If this is the case, it implies that the EA community does want some concretely different things, so from the standpoint of the EA community, this would make funding more appealing.
2. I think in the big picture, it seems like OP/GV doesn’t want to be held as responsible for the EA community. Ultimately there’s a conflict here—on one hand, they don’t want to be seen as responsible for the EA community; on the other hand, they might prefer situations where they can have a very large amount of control over it. I hope it can be understood that these two desires can’t easily go together. Perhaps they won’t be willing to compromise on the latter, but will also complain about the former. That might well happen, but I’d hope a better arrangement could be made.
> OP/GV is usually a pretty responsible funder, so the odds of them suddenly defunding CEA without providing some sort of notice and transitional funding seems low.
I largely agree. That said, if I were CEA, I’d still feel fairly uncomfortable. When the vast majority of your funding comes from any one donor, you’ll need to place a whole lot of trust in them.
I’d imagine that if I were working within CEA, I’d be incredibly cautious not to upset OP or GV. I’d also imagine this would mess with my epistemics/communication/actions.
Also, of course, I’d flag that the world can change quickly. Maybe Trump will go on a push against EA one day, and put OP in an awkward spot, for example.
Just flagging that I very much agree it would be good to tax them far more. However, I’m not sure how doable that is vs. other things. More thinking/research on options here seems good to me.
I think there’s generally been recognition by many academics and intellectuals that the rich get an overly favorable deal, due to their capture of politics.