What do you see as the importance of GiveWell specifically pulling out a “deaths caused” number, vs factoring that number in by lowering the “lives saved” number?
Are you saying that no competent philosopher would use their own definition of altruism when what it “really” means is somewhat different? My experience of studying philosophy has been the reverse—defining terms idiosyncratically is very common.
Is the implication of this paragraph, that all the events described happened after SBF started donating FTX money, intentional?
WHILE SBF’S MONEY was still coming in, EA greatly expanded its recruitment of college students. GiveWell’s Karnofsky moved to an EA philanthropy that gives out hundreds of millions of dollars a year and staffed up institutes with portentous names like Global Priorities and The Future of Humanity. Effective altruism started to synergize with adjacent subcultures, like the transhumanists (wannabe cyborgs) and the rationalists (think “Mensa with orgies”). EAs filled the board of one of the Big Tech companies.
Does this mean you think that, in practice, prediction markets fail to hold people accountable for their track records on mid-probability predictions?
Even if the thing you gave a 57 percent chance of happening never happens, you can still claim you were right.
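To make the question about track records concrete: a single 57 percent call can’t be judged right or wrong, but a long run of them can be scored. Here is a minimal sketch of that idea (my own illustration, not from the piece); the Brier-score function and the two hypothetical forecasters are assumptions for the example, not anything described in the article.

```python
# Sketch: scoring a track record of repeated "57 percent" forecasts.
# A calibrated forecaster's events occur about 57% of the time; a
# miscalibrated one's occur far less often, and the Brier score shows it.
import random

random.seed(0)

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

n = 1000
forecasts = [0.57] * n

# Hypothetical forecaster A: events really do happen ~57% of the time.
outcomes_calibrated = [1 if random.random() < 0.57 else 0 for _ in range(n)]
# Hypothetical forecaster B: same stated probability, but events happen only ~30% of the time.
outcomes_miscalibrated = [1 if random.random() < 0.30 else 0 for _ in range(n)]

print("calibrated forecaster Brier score:   ", round(brier_score(forecasts, outcomes_calibrated), 3))
print("miscalibrated forecaster Brier score:", round(brier_score(forecasts, outcomes_miscalibrated), 3))
```

Over many predictions the miscalibrated forecaster’s score is visibly worse, which is the sense in which mid-probability predictions can still be held to account.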