Aspiring rationalist. Trying to do my best to pay the world back for all the good it gives me.
EA Finland’s volunteer Executive Director September 2025-
I’m curious about problems of consciousness, moral status, and technical AI safety.
(In general, I suspect there’s insufficient social pressure on people to increase our donations to good causes.)
In general, if we agree to a ballpark of “donating 10% is enough to satisfice some goodness threshold”, and also that “it would be good for social pressure to exist for everyone to do at least the threshold amount of good”, I think that raises various considerations.
10% makes sense to me as a Schelling point (and I think the tables that scale by income bracket are also sensible).
But if the threshold amount of good were “donate 10%, aim for an impactful career, become vegan” (which is what I feel the social pressure inside EA is pointing towards), I think that is already a significant ask for many people.
I think it is also important to note that some people are more motivated by trying to maximize impact and offset harm, and some are more motivated by minimizing harm and satisficing for impact. (Of course a standard total utilitarian model would output that whatever maximizes your net impact is best, but human value systems aren’t perfectly utilitarian.)
How do “donate 10%, become vegan, aim for an impactful career”, “donate 30%”, and “donate 20%, aim for an impactful career” compare in effectiveness as norms? I think this is pretty hard to estimate.
What kind of social pressure are you pointing at here? Is it more in the direction of “donate 30%”, or “donate as much as you can and aim for an impactful career”? Or do you mean social pressure in wider society, and not within the EA community?
(Fwiw, I think people underestimate the value of effective marginal spending on themselves in areas where there is room for significant extra value, like purchasing more free time. People plausibly overestimate the value of some other kinds of spending, especially if they don’t introspect on their spending.)
Morality is Objective
I find it intuitive that there could be a small set of objective moral facts, though one much smaller than moral realist positions generally posit, and I do not think this can be justified rationally to a large degree. I think there can be contextual moral facts (as in “rational agents in a society would agree to cooperate on problem X”, or “rational agents would agree to behave in a certain way on a moral problem, given the following constraints”), but I do not think these are enough to justify an objective moral realist position.
I think the set of sensible moral views and positions is large, and thus think that morality is mostly not objective.
I don’t know if this will be a useful comment, but I’m putting it here anyway. In my experience, I and most other people who feel we are being judged too much, or are under too much performance pressure in EA, are mostly doing that judging ourselves. I think there is a causal element in how EA material and dynamics can facilitate people being more self-critical than is healthy, and to me that seems like a much more common problem than actually being judged by other EAs. (I didn’t notice a survey question that would measure the thing I’m trying to point at here.)
The role of guilt and perfectionism in EA, and how EA, as an environment focused on efficiency and doing the very best we can, can lead to difficult mental hang-ups and be more demanding than more traditional ways of doing good. (Traditional ways of doing good are often more focused on feeling good and feeling altruistic, which is useful for the good-doer’s wellbeing but suboptimal for the actual amount of good done.)
Positivity-focused ethics! The imbalance between negativity-biased and positivity-biased versions of utilitarianism, and the implications of this for evaluating policy ideas and the medium-term future.
Any hints / info on what to look for in a mentor / how to find one? (Specifically for community building.)
I’m starting as a national group director in September, and among my focus topics for EAG London are group-focused things like “figuring out pointers / out-of-the-box ideas / well-working ideas we haven’t tried yet for our future strategy”, but also trying to find a mentor.
These were some thoughts I came up with when thinking about this yesterday:
- I’m not looking for accountability or day-to-day support. I get that from inside our local group.
- I am looking for someone who can take a description of the higher-level situation and see different things than I can, either due to perspective differences or to being more experienced and skilled.
- Also someone who can give me useful input on what skills to focus on building in the medium term.
- Someone whose skills and experience I trust, so that when they say “the plan looks good” it gives me confidence, especially when I’m trying to do something that feels to me like a long shot / weird / difficult plan and I specifically need validation that it makes sense.
On a concrete level, I’m looking for someone to have ~monthly 1-1 calls with, plus some asynchronous communication: not about common day-to-day stuff but about the larger calls.
Same, I only had ~800 mana free, but I wouldn’t have thought to donate it otherwise, and it only took a minute.
Regarding missing gears and old books, I have recently been thinking that many EAs (myself included) have a lot of philosophical / cultural blind spots regarding various things (one example might be postmodernist philosophy). It’s really easy to develop a kind of confidence, with narratives like “I have already thought about philosophy a lot” (when it has been mostly engagement with other EAs and discussions facilitated on EA terms) or “I read a lot of philosophy” (when it’s mostly EA books and EA-aligned / utilitarian / longtermist papers and books).
I don’t really know what the solutions for this are. On a personal level I think perhaps I need to read more old books or participate in reading circles where non-EA books are read.
I don’t really have the understanding of liberalism to agree or disagree with EA being engaged with mainstream liberalism, but I would agree that EA as a movement has a pretty hefty “pro-status quo” bias in its thinking, and especially, quite often, in its actions. (There is an interesting contradiction here, in EA views often being pretty anti-mainstream, though, like its thinking on AI x-risks, longtermism, and wild animal welfare.)
Why would it permanently tarnish the movement?
FWIW I don’t know why you’re being disagreement-voted; I broadly agree. I think the amounts of money at play here are enough to warrant an investigation even with a low probability of uncovering something significant.
I disagree with paying it back being obviously the right thing to do. The implications of “pulling back” money whenever something large and shady appears would be difficult to handle, and it would be costly. (If you are arguing that the current case is special, and that in future cases of alleged / proven financial crime we should evaluate case by case, then I am very interested in what the specific argument is.)
I would consider looking into options for vetting the integrity of big donors in the future to be the right thing to do, though.
Another approach could be to be more proactive in taking funding assets in advance, liquidating them, and holding them in fiat (or another stable) currency (e.g. asking big, highly EA-sympathetic donors to fund very long funding periods at once, if at all possible).
Although your argument may make a more convincing case for the funders to fund, since the money will actually be spent quickly.
Polymarket question about whether Binance will cancel the FTX bailout deal: https://polymarket.com/market/will-binance-pull-out-of-their-ftx-deal (The question is phrased in reverse relative to some other markets.)
As an FYI for anyone trying to analyze the probabilities of this situation: on the “real money” (although it’s crypto money) prediction market Polymarket, the odds of the deal continuing are 45%, vs. 55% odds of it being pulled off the table as of posting this message. https://polymarket.com/market/will-binance-pull-out-of-their-ftx-deal
The data might be noisy because some people may be using the market to hedge their crypto positions, but I would still rate it in the same ballpark of reliability as Manifold Markets data, the most important reason being that Polymarket is popular with crypto people, whereas Manifold Markets is popular with EA / rationalist people, who possibly have a very one-sided view of the current FTX trouble.
By many estimates, solving AI risk would only reduce the total probability of x-risk by 1/3 or 2/3, or maybe 9/10 if you weight AI-risk probability very heavily.
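As a rough illustration with made-up numbers (not anyone's actual estimates): if AI made up two thirds of total existential risk,

$$P(\text{x-risk}) = \underbrace{0.10}_{\text{AI}} + \underbrace{0.05}_{\text{other}} = 0.15, \qquad \frac{0.10}{0.15} = \tfrac{2}{3},$$

then fully eliminating AI risk would cut total x-risk from 15% to 5%, a reduction of 2/3, with the remaining 5% untouched.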
Personally I think humanity’s “period of stress” will take at least thousands of years to resolve, but I might be being quite pessimistic. Of course things will get better, but I think the world will still be “burning” for quite some time.
Good questions, I have ended up thinking about many of these topics often.
Something else where I would find improved transparency valuable is the back-of-the-envelope calculations and statistics for rejected funding applications. Reading EA Funds reports, for example, doesn’t give a complete view of where the current bar for interventions is, because we only see the project distribution above the cutoff point.
I read a blog post by Abraham Lincoln once, and I think the core point was that EA is talent-overhung instead of talent-constrained.
Since this removes the core factor of impact from the project, it rounds most expected values down to 0, which is an improvement. You can thank me in the branches that would have otherwise suffered destruction by tail risk.
“There is a good chance, I think, that EA ends up paying professional staff significantly more to do exactly the same work to exactly the same standard as before, which is a substantive problem;”
At least in this hypothetical example, it would seem naively ineffective (not taking into account things like signaling value) to pay people more salary for the same output. (And fwiw, here I think qualities like employee wellbeing are part of “output”. But it is unclear how directly salary helps in that area.)
I was quite worried when I saw the tweets going around, as well. I think the implications of the possibility of LLM sentience are already ethically quite large, and biological computing intuitively has a much larger chance of inducing sentience in the “computers”.
But I’m broadly not sold on the moratorium concept.
I mean, a global moratorium would definitely be the ethically careful choice here. But I think if even a couple of countries allow the building of these datacenters, and other countries allow the purchase of biological computing, then it would be important to act on this in other ways than moratoriums as well.
Something like mandating x amount of research into the ethical implications per y amount of spending on biological computing (e.g. a Pigouvian tax that is earmarked for solving the ethical problems involved) would be what I would primarily advocate, at least from my European point of view.
In the regions where the data centers are being built, campaigning for a moratorium or slowdown and generally raising public awareness sounds like something that should be attempted, and it should be possible to sell people on the ethical implications of brain-based computing...