There’s a big effort to increase access to clean cooking, especially in Africa. So it’s entirely possible that this kind of intervention could be bundled with projects that are already giving away stoves (often in exchange for carbon credits now). I actually know someone leading a project like this in West Africa. I have no idea how different bean-cooking practices are in West Africa vs. Uganda, but I could ask!
Thanks for the feedback Dan. Maybe I’m using the vocabulary incorrectly—does collective specifically mean 1 person 1 vote? I do specifically avoid saying democratic and mention market-based decision making in the first sentence.
It’s not at all obvious to me that putting market-based feedback systems in place would look like the funding situation today. I think it’s worth pushing back on the assumption that EA’s current funding structure rewards the best performers in terms of asset allocation.
I want to push back a bit on my own intuition, which is that trying to build out collective (or market-based) decision-making for EA funding is potentially impractical and/or ineffective. Most EAs respect the power of the “wisdom of crowds”, and many advocate for prediction markets. Why exactly does this affinity for markets stop at funding? It sounds like most think collective decision-making for funding is not feasible enough to consider, and that’s 100% fair, but were it easy to implement, would it be ineffective?
Again, my intuition is to trust the subject matter experts, to rely on the institutions that we’ve built for this specific task. But I invest in index funds, I believe that past performance is no guarantee of future results, and I trust that aggregate markets are typically more accurate than most experts. Have EA organizations proved that they are essentially super-forecasters, that they consistently “beat the EA market” in terms of ROI? Perhaps this metaphor is doomed; these EA orgs are market-makers as well. Who better to place bets than those with insider knowledge?
At the very least, this experiment seems ripe for running if it hasn’t been already. It’s far beyond me to figure out how to structure it; I’ll leave that to those like Nuno, who laid out a potential path. But we’re making a rather large assumption that the collective is by default ineffective.
EDIT: someone pointed out that I’m conflating prediction markets w/ collective decision making. I want to clarify that my comment is referring to market-based decision making (basically prediction markets), which I view as a subset of collective decision making. Maybe my EA vocab is off though.
I wouldn’t call it predatory—in fact, every significant work test / trial I’ve done has been paid, which is remarkably progressive!
However, I empathize with your pain; interviewing for EA jobs is a rigorous and rather impersonal gauntlet. As far as I know, this is a feature, not a bug. It’s frustrating, but I try to cut them some slack. There are many applicants, EA orgs are almost always short-staffed, and they’re trying to avoid bias. Most EAs want an EA job, but these hiring processes are optimized to test this desire.
Knowing this, I don’t bother applying for an EA job unless I truly think that my application can be competitive and that I actually want the job (not a bad heuristic to follow in general).
I’m hopeful for lab-grown salmon (see: Wild Type Foods), but if all else fails and the taste for salmon proves to be too sticky, I could imagine a counterintuitive campaign that positions salmon as “only for holidays.” Of course, I’m sure this could easily backfire. This kind of work is hard!
Could an increase in salmon preference on Christmas also lead to higher preference for salmon year-round? More people are introduced to the fish, learn how to cook it, etc. Perhaps another downstream effect to consider in your model, although difficult to quantify and hard to know if your campaign has much of an impact here.
I’m very thankful for EVF and associated orgs, and as referenced by others, it’s understandable how/why the community is currently organized this way. Eventually, depending on growth and other factors, it’ll probably make sense for the various subs to legally spin off, but I’m not sure if this is high priority—it depends on just how worried EAs are about governance in the wake of this past month.
I will say, conflict of interest disclosures are important, but they seem to be doing a lot of work here. As far as I can tell[1], leadership within these organizations also functions independently, and as EAs they’re particularly aware of bias, so they’ve built processes to mitigate it. But being aware of bias and disclosing it doesn’t necessarily stop [trustworthy] people from being biased (see: doctors prescribing drugs from companies that pay them for talks). Even if these organizations separated tomorrow, I’d half expect them to be in relative lock-step for years to come. Even if these orgs never shared funding/leadership again, they’re in the same community, they’ll have funders in common, and they’ll want to impress the same people, so they’ll make decisions with this in mind. I’ve seen this first-hand in every [non-EA] org I’ve ever been a part of, in sectors of all sizes, so moving forward we’ll have to build with this bug in mind and decide just how much mitigation is worth doing.
I’m aware that none of this is original or ground-breaking but perhaps worth reiterating.
1. ^ This is a little facetious, but does anyone else find themselves caveating more often these days, just in case...
“My point is just that this nightmare is probably not one of a True Sincere Committed EA Act Utilitarian doing these things”—I agree that this is most likely true, but my point is that it’s difficult to suss out the “real” EAs using the criteria listed. Many billionaires believe that the best course of philanthropic action is to continue accruing/investing money before giving it away.
Anyway, my point is more academic than practical; the FTX fraud seems pretty straightforward, and I appreciate your take. I wonder if this forum would be having the same sorts of convos after Thanos snaps his fingers.
I don’t [currently] view EA as particularly integral to the FTX story either. Usually, blaming ideology isn’t particularly fruitful because people can contort just about anything to suit their own agendas. It’s nearly impossible to prove causation; we can only gesture at it.
However, I’m nitpicking here: is spending money on naming rights truly evidence that SBF wasn’t operating under a nightmare utilitarian EA playbook? It’s probably evidence that he wasn’t particularly good at EA, although one could argue it was the toll to further increase earnings to eventually give. It’s clearly an ego play, but other real businesses buy naming rights too, for business(ish) reasons, and some of those aren’t frauds… right?
I nitpick because I don’t find it hard to believe that an EA could also 1) be selfish, 2) convince themselves that ends justify the means and 3) combine 1&2 into an incendiary cocktail of confused egotism and lumpy, uneven righteousness that ends up hurting people. I’ve met EAs exactly like this, but fortunately they usually lack the charm, knowhow and/or resources required to make much of a dent.
In general, I’m not surprised by the community’s reaction. In the best-case scenario, it had no idea that the fraud was happening (and looks a bit naïve in hindsight), and its dirty laundry is nonetheless exposed (it’s not so squeaky clean after all). Even if EA was only a small piece in the machinery that resulted in such a [big visible] fraud, the community strives to do *important* work and it feels bad for potentially contributing to the opposite.
Thanks for the feedback, I appreciate it! SBF has clearly been interested in EA for a long time, but taking him seriously as a thought leader is pretty new. @donychristie mentioned that he was an early poster child of earning-to-give, which I also vaguely remember, but his elevation in status is a recent phenomenon.
Regardless, my main point is that EA should be sensitive to the reputation of its funders. Stuff like this feels off even if it may come from a well-intentioned place.
I was honestly surprised how quickly SBF was “platformed” by EA (but not actually surprised; he was a billionaire shoveling money in EA’s direction). One day I looked up and he was everywhere. On every podcast I follow, fellow EAs quoting him, one EA told me how much they wanted to meet his brother… it felt unearned/uncanny. For me, a main takeaway is that the community should be more cautious about the partners it aligns with and also build a more resilient infrastructure to mitigate blowback when this stuff happens (it’ll happen again; it always does with wealthy donors). When the major consultancies recently started getting flak for unsavory clients, they spun up teams to assess the ethical aspects of contracts and started turning down business that didn’t align with certain standards.
FYI I’m not a “de-platforming” person, just felt like SBF immediately became a highly visible EA figure for no good reason beyond $$$.
Interested to hear why people are downvoting this comment… would love to engage in a discussion!
I wanted to keep the meat of my argument above as concise as possible, but also want to mention that EAs largely fail to grasp 1) what politics does to politicians and 2) the unknowable, cascading, massive impacts of political decisions. Politicians change their minds, trade votes, compromise, and make decisions based on reelection. And the decisions they make reverberate. None of this is predictable or measurable, so it’s hard to imagine how to classify it as effective altruism.
I appreciate you laying out the specifics here! As someone who grew up in/around politics, the ineffectiveness of a freshman member of congress feels obvious. I want to amplify the concern for politics & EA.
EA should seriously consider drawing the line at financial support. Some EAs want EA-aligned candidates to run, and that generally feels like a good idea. Rational politicians who care about important issues are better, right? They know what’s best? Let’s assume that’s true, even if that’s quite an assumption to make. Representatives vote on every bill, many of which have little to do with EA. How should we expect an EA candidate to vote on non-EA issues? If EA publicly and significantly backs a specific candidate, EA becomes at least a little culpable for all of that candidate’s views, not just the EA ones. Furthermore, there’s no guarantee that a candidate will vote the way they say they will. And even if they do, that doesn’t guarantee results, whether that be winning a vote or operationalizing a government program that proves to be effective. There’s so much uncertainty here. How can we as EAs truly calculate return on investment in campaign politics? I don’t think we can with any real accuracy. There’s nothing wrong with supporting candidates that you like, but this seems to fall far short of what we typically expect in terms of evidence. It feels like informed voting, not EA.
Agree that running EA candidates may polarize issues that are refreshingly nonpartisan. This would be an own-goal of sizable consequence.
Politics is a high-leverage arena, so it’s logical that EAs are attracted to it, especially now that there’s money floating around. EA as a (mostly) nonpartisan movement has higher potential with less downside. Channeling the community’s energy into lobbying and advocating for EA-aligned policy is straightforward, effective and transparent. “This strongly suggests that influencing current elected officials, rather than attempting to directly hold political power, plays more towards our strengths.” I couldn’t agree more.
ACT recently did a write-up on nightmares if you’re interested!
+1 - Ecosystem services (and, more generally, Earth systems) are infamously hard to pin down, which is why I often take any bottom-line analyses of climate change with gigantic grains of salt (in both directions). For example, there’s currently a gold rush on technology to quantify the value of soil sequestration, forest sequestration, etc., and as far as I can tell, experts are still bickering over the basics of how to calculate these data with any accuracy. Those are just a few small pieces of a very, very large pie that is difficult to value. Perhaps the modeling takes these massive uncertainties into consideration, but I’m skeptical (and will have to do some research of my own).
Lots of good stuff here! I work in the climate change field, so I have expertise here, although it’s crucial to note that I haven’t spent my career comparing the risk climate change poses with the risks of the other big topics that concern EAs.
It’s not surprising, given my biases, that I always grimace a little when EAs talk about climate. It’s an easy target: lots of attention, tons of media hubbub, plenty of misinformed opinions and outright grifters, and, of course, a lack of direct existential threat. Hey look, here’s an issue that most EAs care about that’s already getting attention and talent, and if you run the numbers, according to our values... that’s more than enough attention! So come work on an underserved issue like AI or pandemic risk! It makes sense to use it as a point of contrast, and I’m glad that 80K Hours still takes climate change seriously. However, the framing could maybe be better; I’m not sure, and I need to think about it more.
One small qualm with this well-researched piece: the plastic bag bit is off. Disregarding the fact that plastic bag fees aren’t just about carbon reductions, that graph shows that as long as you don’t make reusable bags out of cotton, reusable bags do exactly what you want them to do. Now, that’s not to say those policies are great (there are plenty of issues with them), but I don’t find the example to be compelling evidence, especially because no policy demands cotton bags, nor do most people use them. I don’t remember that Danish LCA being particularly good either.
Nick—absolutely! Making relocation more effective is imperative whether it be international or domestic. I believe that domestic migration is wildly underserved but the work done on that topic can and should be expanded to help facilitate immigration.
Thanks for sharing, Chris! I’ve been meaning to reach out to Teleport for a while to learn about their offerings. They’ve put together some decent data but the UI lacks something integral. I do like their intake survey as a way to narrow choices (a la @evelynciara’s comment). The entire platform feels… abandoned? Could be a good partner down the line for the data side.
Thanks for this summary. I listened to this yesterday & browsed through the SH subreddit discussion, and I’m surprised that it hasn’t received much discussion on here. Perhaps the EA community is talked out on this subject, which is fair enough. But as far as I can tell, it’s one of Will’s first extended public remarks on SBF, so it feels discussion-worthy to me.
I agree that the discussion was oddly vague given all the actual evidence we have. I don’t feel like going into much detail but a few things I noticed:
It seems that Will is still somewhat in denial that SBF was a fraud. I guess this is a perfectly valid opinion (Will knows SBF), but I can’t help but feel that it’s naive (or measured, if we’re being charitable). We can quibble over his reasoning, but fraud is fraud, and SBF committed a lot of it, repeatedly and at scale. He doesn’t have to be good at fraud to be one.
They barely touch on the dangers of a “maximalist” EA. If Will doesn’t believe SBF was fraudulent for regular greedy reasons, then EA may have played a part, and that’s worth considering. No need to be overly dramatic here, the average EA is well-intentioned and not going to do anything like this… but as long as EA is centralized and reliant on large donors, it’s something we need to further think through.
The podcast is a reminder of how difficult it is to understand motivations, and how difficult it is for “good” people to understand what motivates “bad” actions (adding scare quotes to acknowledge the big, vast gray areas here). Given that there are a lot of neuro-atypical EAs, the community seems desensitized to potential red flags like claiming not to feel love. This is my hobbyhorse: like any community, EA has smart, capable people that I wouldn’t ever want to have power. It sounds like there were people who felt this way about SBF, including a cofounder. It’s a bit shocking to me that EA leadership didn’t see SBF as much of a liability.