I’m an artist, writer, and human being.
To be a little more precise: I make video games, edit Wikipedia, and write here and on LessWrong!
Thanks for this; it’s a nicely compact summary of a really messy situation that I can quickly share if necessary.
This is a fair critique imo; I’m updating away from the view that SBF was using EA for sociopathic reasons. That said, I’m only slightly updating toward the view that EA ideology was his main motivation to commit fraud, as that may very well still not be the case.
my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice
Has anyone actually tested this? It might be worthwhile to record your initial impressions on a set number of grants, then deliberately spend a fixed amount of time researching each one further, and calculate the ratio of cases in which the further research changes your mind.
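A minimal sketch of how such a self-experiment could be tallied (the grant names and decision labels here are purely illustrative, not real data):

```python
# Hypothetical records of grant evaluations: each entry holds the
# snap-judgment decision and the decision after deeper research.
evaluations = [
    {"grant": "A", "initial": "fund", "after_research": "fund"},
    {"grant": "B", "initial": "fund", "after_research": "reject"},
    {"grant": "C", "initial": "reject", "after_research": "reject"},
    {"grant": "D", "initial": "fund", "after_research": "fund"},
]

# Count the cases where extra research flipped the decision,
# then compute the change-of-mind ratio.
changed = sum(1 for e in evaluations if e["initial"] != e["after_research"])
ratio = changed / len(evaluations)
print(f"Further research changed the decision in {ratio:.0%} of cases")
```

If that ratio comes out low over a decent sample, it would be some evidence that the quick funding decisions are already capturing most of the signal.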
This is really interesting—thanks for sharing!
Quick note that I misread “refuges” as “refugees,” and got really confused. In case anyone else made the same mistake, this post is talking about bunkers, not immigrants ;)
Do we know how much impact Sam Bankman-Fried’s personal philosophy is going to have on FTX’s grant-making choices? This is a lot of financial power for a single organization to have, so I expect the makeup of the core team to have an outsized effect on the rest of the movement.
+1 on this. It is painfully clear that we need to radically improve our practices relating to due diligence moving forward.
My brother was recently very freaked out when I asked him to pose a set of questions that he thinks an AI wouldn’t be able to answer, and GPT-3 gave excellent-sounding responses to his prompts.
I would strongly support doing this—I have strong roots in the artistic world, and there are many extremely talented artists online that I think could potentially be of value to EA.
How bad is it to fund someone untrustworthy? Obviously if they take the money and run, that would be a total loss, but I doubt that’s a particularly common occurrence (you can only do it once, and doing so would completely shatter your social reputation, so even unethical people don’t tend to do that). A more common failure mode would seem to be apathy, where once funded not much gets done, because the person doesn’t really care about the problem. However, if something gets done instead of nothing at all, then that would probably be a (fairly weak) net positive. The reason that’s normally counted as a negative is that the money isn’t being used in a more cost-effective manner, but if our primary problem is spending enough money in the first place, that may not be much of an issue at all.
There is a very severe potential downside if many funders think in this manner, which is that it will discourage people from writing about potentially important ideas. I’m strongly in favor of putting more effort and funding into PR (disclaimer that I’ve worked in media relations in the past), but if we refuse to fund people with diverse, potentially provocative takes, that’s not a worthwhile trade-off, imo. I want EA to be capable of supporting an intellectual environment where we can ask about and discuss hard questions publicly without worrying about being excluded as a result. If that means bad-faith journalists have slightly more material to work with, then so be it.
I’m really excited about this, and look forward to participating! Some questions—how will you determine which submissions count as “winners” vs. “runners-up” vs. “honorable mentions”? I’m confused about what the criteria for differentiating the categories are. Also, are there any limits on how many submissions can fall into each category?
As a single data point, I’ll note that until reading this article, I was under the impression that the orthogonality thesis is the main reason why researchers are concerned.
This is hilarious; I was literally thinking yesterday that we should be reaching out to the Orthodox/Modern Orthodox Jewish community, and was going to write a post on that today! Happy to know this already exists :)
May I ask what your long-term plans are?
+1 here as well; a frugality option would be an amazing thing to normalize, especially if we can get it adopted beyond the world of EA (which may be possible if we get some good reporting on it).
Came across this post today—I assume the bounty has been long-closed by now?
This is really exciting! I’m glad there are so many talented people on the case, and hope the good news will only grow from here :)
Within the domain of politics (and to a lesser degree, global health), PR impact makes an extremely large difference in how effective you’re able to be at the end of the day. If you want, I’d be happy to provide data on that, but my guess is you’d agree with me there (please let me know if that isn’t the case). As such, if you care about results, you should care about PR as well. I suspect that your unease mostly lies in the second half of your response—that we should do things for “direct, non-reputational reasons,” and that actions done for reputational reasons would compromise our perceived integrity. The thing is, reputation is actually one of the things we already pay a tremendous amount of attention to—in the context of both forecasting and charity evaluation. To explain:
In forecasting, if you want your predictions to be maximally accurate, it is highly worthwhile to see what domain experts and superforecasters are saying, since they either have a confirmed track record of getting predictions right, or a track record of contributing to the relevant field (which means they will likely have a more robust inside view). In charity evaluation, the only thing we usually have to go on to determine the effectiveness of existing charities is what the charities themselves say about their impact, and if we’re very lucky, what outside researchers have evaluated. Ultimately, the only real reason we have to trust some people or organizations more than others is their track record (certifications are merely proxies for that). Organizations like GiveWell partially function as track-record evaluators, doing the hard parts of the work for us to determine whether charities are actually doing what they say they’re doing (comparing effectiveness once that’s done is the other aspect of their job, of course).

When dealing with longtermist charities, things get trickier. It’s impossible to evaluate a direct track record of impact, so the only thing we have to go on is proxies for effectiveness—is the charity structured well, do we trust the people working there, have they been effective at short-termist projects in the past, etc. Evaluation becomes a semi-formalized game of trust.
The outside world cares about track record as much as—if not significantly more than—we do. I do not think it would signal a lack of integrity for SBF to deliberately invest in short-term altruistic projects that can establish a positive track record, showing that not only does he sincerely want to make the world a better place, he knows how to actually go about doing that.
Other than the donations towards helping Ukraine, I’m not sure there’s any significant charity on the linked page that will have really noticeable effects within a year or two. For what I’m talking about, there needs to be an obvious difference made quickly—it also doesn’t help that those are all pre-existing charities under other people’s names, which makes it hard to say for sure that it was SBF’s work that made the crucial difference even if one of them does significantly impact the world in the short term.
+1 from me.
I was talking about the whole situation with my parents, and they mentioned that their local synagogue experienced a very similar catastrophe, with the community’s largest funder turning out to be a con man. Everybody impacted had a lot of soul-searching to do, but ultimately, in retrospect, there was really nothing they could or should have done differently—it was a black swan event that hasn’t repeated in the quarter-century or so since it happened, and there were no obvious red flags until it was too late. Yes, we can always find details to agonize over, but ultimately, I doubt it will be very productive to change our whole modus operandi to prevent this particular black swan event from repeating (with a few notable exceptions).