Experienced quant trader, based in London. Formerly a volunteer at Rethink Priorities, where I did some forecasting research. Interested in most things; donations have been primarily to longtermism, animal welfare and meta causes.
Charles Dillon
"Poor nations would suffer unintended consequences because they rely heavily on exports to rich countries - In fact, poor nations will benefit from it because as we internalize costs, they'll get a fair salary and get compensated for environmental costs."
This is, to put it mildly, implausible, and requires strong evidence IMO.
That you dismissed the most important issue with your claim so tersely without really engaging with it suggests you simply do not care very much about the effects on the global poor, which in a scenario without economic growth would, I expect, be much, much worse than the worst plausible effects of 3-4° warming (which I take to be the likely "business as usual" outcome). Citing Hickel here, a known bad faith/disingenuous actor in this area (see here), doesn't serve to provide much evidence either.
Hi Sofia. I agree that orgs should try to avoid relying on volunteer labor if they can, for the reasons you outline. I don't agree with your explanation for why the status quo is what it is.
I don't agree that the "EA community's high use of volunteer labor shows that a lot of EAs don't relate to the average person in the world who is a couple of paychecks away from being homeless", first of all because I'm not clear on how high that use is, and secondly because the orgs who happen to be using volunteer labor may just be financially constrained. Just because there's a lot of money in EA doesn't necessarily mean those particular orgs have that money available to spend.
"For example, most people in EA that I spoke to about me not being able to get a visa were surprised that this is even an issue and many people who organise EA-related events have made plans to make them more accessible to people from more countries." - this seems to support my point? Those organising the events make plans to make them accessible, i.e. are aware of the issue and taking some (though clearly not all possible) steps to mitigate difficulties for attendees.
That many people not involved in organising events don't know about all the difficulties potential attendees might have doesn't seem too important to me, though I'm open to being corrected here? It seems a lot to expect everyone to be knowledgeable about this if it's not directly related to their work.
"The EA community has little awareness of their privilege."
This strikes me as straightforwardly untrue, unless you are holding the community to a standard which nobody anywhere meets. The EA community exists largely because individuals recognised their outsized (i.e. privileged) position to do good in the world, given their relative access to resources compared to e.g. those in poverty and non-human animals, and strove to use that privilege for good.
That EA doesn't, e.g., make it as easy for you to go to EA conferences as it is for a Western citizen is not because EA doesn't know that some people have difficulty travelling. It is because doing that costs resources that have been allocated to something else. It might be a mistake not to use those resources to help you travel to a conference vs whatever the opportunity cost of that decision would be, but that is a very different question, I think.
There seems to me to be a fallacy here that assumes every action SBF takes needs to be justifiable on its first-order EA merits.
The various stakes FTX have taken in crypto companies during this downturn are obviously not done in lieu of donations - they are business decisions, presumably done with the intention of making more money, as part of the process of making FTX a success. Whether they are good decisions in this light is hard for me to say, but I'd be inclined to defer to FTX here.
No, I don't want to bet at this point - I'm not interested in betting such a small amount, and don't want to take the credit risk inherent in betting a larger amount given the limited evidence I've got about your reliability.
This DM never occurred, FWIW, as of t+8.
I am skeptical of attempts to gatekeep here. E.g. I found Scoblic's response to Samotsvety's forecast less persuasive than their post, and I am concerned here that "amateurish" might just be being used as a scold because the numbers someone came up with are too low for someone else's liking, or because they don't like putting numbers on things at all and feel it gives a false sense of precision.
That isn't to say this is the only criticism that has been made, but just to highlight one I found unpersuasive.
That seems like quite the bold prediction, depending on the operationalization of "new" and "effective altruist".
I would give you 4-1 odds on this if we took "new" to mean folks not currently giving at scale using an EA framework and not deriving their wealth from FTX/Alameda or Dustin Moskovitz, and require the donors to be (i) billionaires per Bloomberg/Forbes and (ii) giving >$50m each to Effective Altruist aligned causes in the year 2027.
I wrote some similar questions mid last year, prior to FTX scaling up their giving; they could be used as a template:
https://www.metaculus.com/questions/7340/new-megadonor-in-ea-in-2026/
I think the thesis is plausible here, but it would be more credible and easier to discuss and act upon if you gave more precise predictions or confidence intervals (e.g. "I think with X% confidence there will be Y billionaires with an aggregate net worth of >Z, excluding Dustin Moskovitz and the FTX/Alameda crew, in EA by 2027").
Using the code I linked above, it should require only minor changes if the Metaculus prediction is in one of the time series in the data, which I guess it is? Probably for someone with good familiarity with the API it would be a matter of an hour or two, otherwise it might take a bit longer.
I unfortunately will not have time to do this anytime soon.
"Perfectly calibrated", not "perfect". So not that all of their predictions were correct, but that, e.g., 20% of their 20% predictions came true, 80% of their 80% predictions, etc.
So in this case, someone making all 90% predictions will have an expected score of 0.9×0.1^2 + 0.1×0.9^2 = 0.09, while someone making all 80% predictions will have an expected score of 0.8×0.2^2 + 0.2×0.8^2 = 0.16.
In general a lower expected score means your typical prediction was more confident.
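To make the arithmetic above concrete (my own illustrative sketch, assuming the score in question is the Brier score): the expected score of a perfectly calibrated forecaster who always predicts at confidence p simplifies algebraically to p(1-p), which is largest at 50% confidence and falls towards zero at either extreme.

```python
def expected_brier(p: float) -> float:
    """Expected Brier score of a perfectly calibrated forecaster who
    always predicts at confidence p: with probability p the event
    happens (squared error (1 - p)^2), with probability 1 - p it does
    not (squared error p^2)."""
    return p * (1 - p) ** 2 + (1 - p) * p ** 2  # simplifies to p * (1 - p)

print(round(expected_brier(0.9), 4))  # 0.09, the 90% case above
print(round(expected_brier(0.8), 4))  # 0.16, the 80% case above
print(round(expected_brier(0.5), 4))  # 0.25, the maximum
```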
One thing to note here is that it is plausible that your errors are not symmetric in expectation, if there's some bias towards phrasing questions one way or another (this could be something like frequently asking "will [event] happen?", where optimism might cause you to be too high in general, for example). This might mean assuming linearity could be wrong.
This is probably easier for you to tell since you can see the underlying data.
I'm using overconfident here to mean closer to extreme confidence (0 or 100, depending on whether they are below or above 50%, respectively) than they should be.
Minor point, but I disagree with the unqualified claim of being well calibrated here except for the 90% bucket, at least a little.
Weak evidence that you are overconfident in each of the 0-10, 10-20, 70-80, 80-90 and 90%+ buckets is decent evidence of an overconfidence bias overall, even if those errors are mostly individually within the margin of error.
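To sketch why several individually weak signals in the same direction add up (my own illustration, not anything computed from the original data): under the null hypothesis of no bias, each bucket is equally likely to err in either direction, so a simple one-sided sign test puts the chance of all five buckets leaning overconfident at 0.5^5, about 3%.

```python
from math import comb

def sign_test_p(n_buckets: int, n_overconfident: int) -> float:
    """One-sided sign test: probability of at least n_overconfident of
    n_buckets erring in the overconfident direction, if each bucket
    independently erred in either direction with probability 1/2."""
    tail = sum(comb(n_buckets, k) for k in range(n_overconfident, n_buckets + 1))
    return tail / 2 ** n_buckets

# Five buckets (0-10, 10-20, 70-80, 80-90, 90%+), all leaning overconfident:
print(sign_test_p(5, 5))  # 0.03125
```

This ignores that the buckets may differ in sample size and that the direction of error in each is itself noisy, so treat it as a rough lower bound on how surprising the pattern is, not a formal test.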
It's hard to imagine him not being primarily seen as a crypto guy while he's regularly going to Congress to talk about crypto, and lobbying for a particular regulatory regime. Gates managed this by not running Microsoft any more; it might take a similarly big change in circumstances to get there for SBF.
I don't think the actual dollar number he spends is that important here. Media coverage can be very scope insensitive, so it isn't obvious to me that $100m would be meaningfully different to $50m or $25m here.
I agree more legible altruistic acts would be good for PR, and contra Stefan I do think there's a case for focusing on this to an extent, but that doesn't mean just picking a big number out of a hat and spending it.
"generally really shitty salaries for researchers in the UK" as a downside for Oxbridge - this seems like something any org hiring researchers can unilaterally fix, at least for their researchers?
I think it would not have been difficult for you to do a back of the envelope calculation for how many net makers would be put out of business for each quantity of nets distributed (a net maker can make X nets, coverage was Y% before AMF arrived). The lack of even a bare-bones quantitative case reinforces my prior that this is very unlikely to be a significant issue.
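For illustration, the kind of back-of-the-envelope calculation I have in mind could look like the following; every number is a made-up placeholder, not real data about AMF or any actual net market.

```python
# All numbers below are hypothetical placeholders, not real AMF/market data.
nets_distributed = 1_000_000     # free nets given out in a region
prior_coverage = 0.20            # fraction of demand previously met commercially
nets_per_maker_per_year = 2_000  # annual output of one local net maker

# Only nets that would otherwise have been bought displace local makers;
# free nets going to households who bought nothing do not.
displaced_demand = nets_distributed * prior_coverage
makers_displaced = displaced_demand / nets_per_maker_per_year
print(makers_displaced)  # 100.0 net makers, under these placeholder numbers
```

Even a sketch this crude forces the key inputs (X and Y above) into the open, where they can be checked against real data.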
Do you have a back of the envelope calculation for the expected impact of e.g. a marginal USD 10,000?