EA Finland chairperson and an aspiring rationalist
Kerkko Pelttari
Regarding missing gears and old books, I have recently been thinking that many EAs (myself included) have a lot of philosophical and cultural blind spots in various areas (one example might be postmodernist philosophy). It’s really easy to develop a kind of confidence, with narratives like “I have already thought about philosophy a lot” (when it has mostly been engagement with other EAs and discussions facilitated on EA terms) or “I read a lot of philosophy” (when it’s mostly EA books and EA-aligned / utilitarian / longtermist papers and books).
I don’t really know what the solutions to this are. On a personal level, I think I perhaps need to read more old books, or participate in reading circles where non-EA books are read.
I don’t really have the understanding of liberalism needed to agree or disagree with EA being engaged with mainstream liberalism, but I would agree that EA as a movement has a pretty hefty “pro-status quo” bias in its thinking, and quite often especially in its actions. (There is an interesting contradiction here, though, in EA views often being pretty anti-mainstream, like thought on AI x-risks, longtermism and wild animal welfare.)
Why would it permanently tarnish the movement?
FWIW I don’t know why you’re being disagree-voted; I broadly agree. I think the amounts of money at play here are enough to warrant an investigation even with a low probability of uncovering something significant.
I disagree with paying the money back being obviously the right thing to do. The implications of “pulling back” money whenever something large and shady surfaces would be difficult to handle, and it would be costly. (If you are arguing that the current case is special and that in future cases of alleged or proven financial crime we should evaluate case by case, then I am very interested in what the specific argument is.)
I would, however, consider looking into options for vetting the integrity of big donors in the future to be the right thing to do.
Another approach could be to be more proactive about taking funding assets in advance, liquidating them, and holding them in fiat (or another stable currency). (E.g. ask big, highly EA-sympathetic donors to fund very long funding periods at once if at all possible.)
Although your argument may make a more convincing case to funders, since the money will actually be spent quickly.
Polymarket question on whether Binance will cancel the FTX bailout deal: https://polymarket.com/market/will-binance-pull-out-of-their-ftx-deal (The question is phrased in reverse relative to some other markets.)
As an FYI for anyone trying to analyze the probabilities of this situation: on the “real money” (although it’s crypto money) prediction market Polymarket, the odds of the deal continuing are 45% vs. 55% odds of it being pulled off the table as of posting this message. https://polymarket.com/market/will-binance-pull-out-of-their-ftx-deal
The data might be noisy because some people are possibly using the market to hedge their crypto positions, but I would still rate it in the same ballpark of reliability as Manifold Markets data. The most important reason is that Polymarket is popular with crypto people, whereas Manifold Markets is popular with EA / rationalist people, who possibly have a very one-sided view of the current FTX trouble.
By many estimates, solving AI risk would only reduce the total probability of x-risk by 1/3, 2/3, or maybe 9/10 if you weight AI risk very heavily.
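To illustrate the arithmetic with purely made-up numbers (an assumption for the sake of illustration, not a claim about actual risk estimates): if total existential risk decomposes roughly additively as
$$p_{\mathrm{total}} \approx p_{\mathrm{AI}} + p_{\mathrm{other}},$$
then eliminating AI risk reduces total x-risk by the fraction $p_{\mathrm{AI}} / p_{\mathrm{total}}$. For example, $p_{\mathrm{AI}} = 0.02$ with $p_{\mathrm{other}} = 0.04$ gives a 1/3 reduction, $p_{\mathrm{AI}} = 0.04$ with $p_{\mathrm{other}} = 0.02$ gives 2/3, and $p_{\mathrm{AI}} = 0.09$ with $p_{\mathrm{other}} = 0.01$ gives 9/10.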
Personally I think humanity’s “period of stress” will take at least thousands of years to resolve, but I might be being quite pessimistic. Of course the situation will get better, but I think the world will still be “burning” for quite some time.
Good questions; I have ended up thinking about many of these topics often.
Something else where I would find improved transparency valuable is the back-of-envelope calculations and statistics for rejected funding applications. Reading EA Funds reports, for example, doesn’t give a complete view of where the current bar for interventions is, because we only see the distribution of projects above the cutoff point.
I read a blog post by Abraham Lincoln once and I think the core point was that EA is talent overhung instead of talent constrained.
Since this removes the core factor of impact from the project, it rounds most expected values down to 0, which is an improvement. You can thank me in the branches that would have otherwise suffered destruction by tail risk.
“There is a good chance, I think, that EA ends up paying professional staff significantly more to do exactly the same work to exactly the same standard as before, which is a substantive problem;”
At least in this hypothetical example it would seem naively ineffective (not taking into account things like signaling value) to pay people a higher salary for the same output. (And FWIW, here I think qualities like employee wellbeing are part of “output”. But it is unclear how directly salary helps in that area.)
Perhaps a general willingness to commit X% of funding to criticism of areas that are heavily funded by EA-aligned funding organizations could work as a general heuristic for enabling the second idea.
(E.g. if “pro current x-risk” research in general gets N funding, then some percentage of N would be made available for “critical work” in the same area. But in science it can sometimes be hard to even say which work is critical and which builds on top of existing work.)
I’m not affiliated with EA research organizations at all (I help run a local group in Finland and am looking at industry / other EA-affiliated career options more so than research specifically).
However, I have had multiple discussions with fellow local EAs where we considered it problematic that some x-risk papers are held to quite “weak” standards of criticism relative to how much they often imply. Heartfelt thanks to you both for publishing and discussing this topic, and for starting a conversation on the important meta-topic of EA research topic selection, funding decision-making, and standards.
Same, I only had ~800 mana free, but I wouldn’t have thought to donate it otherwise, and it only took a minute.