Working in healthcare technology.
MSc in applied mathematics/theoretical ML.
Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help “neartermist” causes.
Looks like the number is just for 2024; it doesn't say what the previous numbers were (e.g. before the FTX scandal, when most attendees could be reimbursed for flights and accommodation).
Full disclosure: I was rejected from an EAG, in 2022 I think (after attending one the year before).
Having previously criticised the lack of transparency in the EAG admissions process, I’m happy to see this post. Strongly upvoted.
With all the scandals we’ve seen in the last few years, I think it should be very evident how important transparency is. See also my explanation from last year.
...some who didn’t want to be named would have not come if they needed to be on a public list, so barring such people seems silly...
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.
The EA movement absolutely cannot carry on with the “let’s allow people to do whatever without any hindrance, what could possibly go wrong?” approach.
Just a reminder that I think it’s the wrong choice to allow attendees to leave their name off the published list.
I haven’t listened to that many episodes—in fact, of those you listed I’ve only listened to the one with Howie Lempel (which also resonated with me). But I think the episode I found most interesting is the one with Mushtaq Khan about effectively fighting corruption in developing countries.
I think it is irrelevant. In every context where I’ve seen it presented as ‘on topic’ in EA, the connection between it and any positive impact was simplistic to the point of being imaginary, while the discussion promoted dangerous views, just like in the post you quoted.
As an Ashkenazi Jew myself, I find that saying “we’d like to make everyone like Ashkenazi Jews” reads like a mirror image of Nazism, and it very clearly should not appear on the forum.
I’m an Israeli Jew and was initially very upset about the incident. I don’t remember the details, but I recall that in the end I was much less sure that there was anything left to be upset about. It took time but Tegmark did answer many questions posed about this.
Do you maybe want to voice your opinion of the methodology in a top level comment? I’m not qualified to judge myself and I think it’d be informative.
I downvoted and disagreevoted, though I waited until you replied to reassess.
I did so because I see absolutely no gain from doing this, I think the opportunity cost makes it net negative, and I oppose the hype around prediction markets: it seems to me like the movement is obsessed with them, but in practice they haven’t led to any real impact.
Edit: regarding ‘noticing we are surprised’ - one would think this result is surprising; otherwise, wouldn’t there already be voices against the high level of funding for EA conferences?
I admire the boldness of publishing a serious evaluation which shows a common EA intervention to have no significant effect (with all the caveats, of course).
What do you think can be gained from that?
Looking for people (probably from US/UK) to do donation swaps with. My local EA group currently allows tax-deductible donations to:
GiveWell—Top Charities Fund
Animal Charity Evaluators—Top Charities Fund
Against Malaria Foundation
Good Food Institute
<One other org that I don’t want to include here>
However, I would like to donate to the following:
GiveWell—All Grants Fund (~$1230)
GiveDirectly (~$820)
The Humane League (~$580)
If anyone is willing to donate these sums and to have me donate an equal sum to one of the funds mentioned above, please contact me.
I think you’re mostly right, especially about LLMs and the current hype (though I do think a couple of innovations beyond current technology could get us to AGI). But I want to point out that AI progress has not been entirely fruitless. The most salient example in my mind is AlphaFold, which is actually used for research, drug discovery, etc.
Thanks for correcting me. I do believe they’re much less involved in these things nowadays, but I might be wrong.
I indeed haven’t seen any expression of racism from either, but I deliberately chose to write “racist/eugenicist” before for exactly this kind of reason. I personally believe that even discussing such interventions in the way they have been discussed in EA carries risks (of promoting racist policies by individuals, organizations, or governments) that far outweigh any benefits. Such a discussion might be possible privately between people who all know each other very well and can trust each other’s good intentions, but otherwise it is too dangerous.
I appreciate you sharing your experience. It’s different from mine and so it can be that I’m judging too many people too harshly based on this difference.
That said, I suspect that it’s not enough to have this aversion. The racism I often see requires a degree of indifference to the consequences of one’s actions and discourse, or maybe a strong naivety that makes one unaware of those consequences.
I know I can’t generalize from one person, but if you see yourself as an example of the different mindset that might lead to the behaviour I observed—notice that you yourself seem to be very aware of the consequences of your actions, and every bit of expression from you I’ve seen has been the opposite of what I’m condemning.
Edit: for those downvoting, I would appreciate feedback on this comment, either here or in a PM.
Maybe not consciously. Does that make it any better?
I don’t think the movement can be ascribed a stance on this. What I said, rather, is:
many EAs are racists/eugenicists and want to have such opinions around them
And I stand behind this. They just aren’t the people responsible for the interventions you mentioned.
I somehow missed that 🤦🏼‍♂️.