I’m a student of moral science at the University of Ghent. I also started and ran EA Ghent from 2020 to 2024, at which point I quit in protest over the Manifest scandal (and the reactionary trend it highlighted). I no longer consider myself an EA (though I’m still part of GWWC and EAA, and if the rationalists split off I’ll rejoin).
If you’re interested in philosophy and mechanism design, consider checking out my blog.
I co-started Effectief Geven (a Belgian effective giving org), and am a volunteer researcher at SatisfIA (an AI-safety org) and a volunteer writer at GAIA (an animal welfare org).
Possible conflict of interest: I have never received money from EA, but could plausibly be biased in favor of the organizations I volunteer for.
Allowing anonymous predictions causes a whole bunch of other problems. But even if we could somehow get rid of coordination mechanisms like dominance assurance contracts, the fear of losing your social network, or psychological loyalty toward your ingroup, two questions remain: is it really in your own interest to lose the source of income for you and your children for a <51% chance of a one-time payout? And won’t the outcomes of conditional prediction markets be biased toward the interests of rich people?