Data analyst at a consulting firm, previously ran an EA university group.
Rebecca
Re your footnote 4, CE/AIM are starting an earning-to-give incubation program, so that is likely to change pretty soon.
Factual note: Rory Stewart isn’t a co-founder of GD; he is/was a later-stage employee.
Are you sure it’s not the other possible candidate? I have only heard negative things about one of their personalities.
Was that lying or misremembering though? Lying is a fairly big accusation to make.
The Wired article says that there’s been a bunch more research in recent years on the effects of bed nets on fish stocks, so I would consider the GiveWell response out of date.
I don’t think it can be separated neatly. If the person who has died as a result of the charity’s existence is a recipient of a disease reduction intervention, then they may well have died from the disease instead if not for the intervention.
-
What do you see as the importance of GiveWell specifically pulling out a “deaths caused” number, vs factoring that number in by lowering the “lives saved” number?
-
Are you saying that no competent philosopher would use their own definition of altruism when what it “really” means is somewhat different? My experience of studying philosophy has been the reverse: defining terms idiosyncratically is very common.
-
Is the implication of this paragraph, that all the events described happened after SBF started donating FTX money, intentional?
WHILE SBF’S MONEY was still coming in, EA greatly expanded its recruitment of college students. GiveWell’s Karnofsky moved to an EA philanthropy that gives out hundreds of millions of dollars a year and staffed up institutes with portentous names like Global Priorities and The Future of Humanity. Effective altruism started to synergize with adjacent subcultures, like the transhumanists (wannabe cyborgs) and the rationalists (think “Mensa with orgies”). EAs filled the board of one of the Big Tech companies
Does this mean you think prediction markets don’t end up working in practice to hold people to their track records of mid-probability predictions?
Even if the thing you gave a 57 percent chance of happening never happens, you can still claim you were right.
-
I don’t think you incorporate the number at face value, but plausibly you do factor it in to some extent, given the level of detail GiveWell goes into for other factors.
I am very surprised to read that GiveWell doesn’t at all try to factor in deaths caused by the charities when calculating lives saved. I don’t agree that you need a separate number for lives lost versus lives saved, but I had always implicitly assumed that ‘lives saved’ was a net calculation.
The rest of the post is moderately misleading though (e.g. saying that Holden didn’t start working at Open Phil, and the EA-aligned OpenAI board members didn’t take their positions, until after FTXFF had launched).
We don’t know from this announcement that they are planning to prioritise speed of sale over time-adjusted return; it could still make sense to stop paying as many salaries, for example, and to have declared it shut down as a project.
That wasn’t my interpretation of this section. I took “be smart” to mean like ‘make smart career decisions’, not ‘be Smart^TM’
Regarding your last paragraph, I see the Profile 1 vs Profile 2 axis as basically distinct from the Doer vs Thinker axis. People can spend years in large companies without ever needing or developing a get-sh*t-done mentality, and on the other hand, starting an EA org and rapidly iterating can be a great way to develop or exercise that skill (see e.g. BlueDot Impact, AI-Plans.com). Maybe it’s that you’re leaving out a Profile 3: people who start their career in (or very quickly switch into) EA, but by starting a new thing rather than working their way up the ladder of an EA org. (Though the starting of a new thing could technically happen within an existing org as well.)
I’d be quite interested in reading a more fleshed-out version of this, if you were considering whether that was worth your time. What dimensions of advice about a given career path do you see people being given that should be discounted in the absence of domain success?
All CE charities to date have focused on global development or animal welfare
CE incubated Training for Good, which runs two AI-related fellowships. They didn’t start out with an AI focus, but they also didn’t start out with a GHD or animal welfare focus.
I didn’t vote, but I’d guess that people are trying to discourage politicisation on the forum?
This feels like it could just be a genre of Quick Takes that people may choose to post?
Saying it isn’t an EA project seems too strong: another co-founder of SMA is Jan-Willem van Putten, who also co-founded Training for Good, which runs the EU tech policy and Tarbell journalism fellowships and at one point piloted grantmaker training and ‘coaching for EA leaders’ programs. TfG was incubated by Charity Entrepreneurship.
How are people just letting him get away with a victim narrative?
2 worked well for me, I think.
I took that second quote to mean ‘even if Sam is dodgy, it’s still good to publicly back him’.