I’m an artist, writer, and human being.
To be a little more precise: I make video games, edit Wikipedia, and write here and on LessWrong!
I didn’t focus on it in this post, but I genuinely think that the most helpful thing to do involves demonstrating proficiency at achieving near-term goals, as that both allows us to troubleshoot potential practical issues and allows outsiders to evaluate our track record. Part of showing integrity is being transparent (assuming we want outside support), and working on neartermist causes allows us to do that more easily.
Fair enough; I didn’t mean to imply that $100M is exactly the amount that needs to be spent, though I would expect it to be near a lower bound on what he would have to spend (on projects with clear, measurable results) if he wants to become known as “that effective altruism guy” rather than “that cryptocurrency guy.”
Within the domain of politics (and to a lesser degree, global health), PR impact makes an extremely large difference in how effective you’re able to be at the end of the day. If you want, I’d be happy to provide data on that, but my guess is you’d agree with me there (please let me know if that isn’t the case). As such, if you care about results, you should care about PR as well. I suspect that your unease mostly lies in the second half of your response—we should do things for “direct, non-reputational reasons,” and actions done for reputational reasons would undermine our perceived integrity. The thing is, reputation is actually one of the things we already pay a tremendous amount of attention to—in the context of both forecasting and charity evaluation. To explain:
In forecasting, if you want your predictions to be maximally accurate, it is highly worthwhile to see what domain experts and superforecasters are saying, since they either have a confirmed track record of getting predictions right or a track record of contributing to the relevant field (which means they likely have a more robust inside view).

In charity evaluation, the only thing we usually have to go on to determine the effectiveness of existing charities is what the charities themselves say about their impact and, if we’re very lucky, what outside researchers have evaluated. Ultimately, the only real reason we have to trust some people or organizations more than others is their track record (certifications are merely proxies for that). Organizations like GiveWell partially function as track-record evaluators, doing the hard parts of the work for us to determine whether charities are actually doing what they say they’re doing (comparing effectiveness once that’s established is the other aspect of their job, of course).

When dealing with longtermist charities, things get trickier. It’s impossible to evaluate a direct track record of impact, so the only thing we have to go on is proxies for effectiveness: is the charity structured well, do we trust the people working there, have they been effective at near-term projects in the past, and so on. Evaluation becomes a semi-formalized game of trust.
The outside world cares about track record at least as much as we do, if not significantly more. I do not think it would signal a lack of integrity for SBF to deliberately invest in short-term altruistic projects that can establish a positive track record, showing that not only does he sincerely want to make the world a better place, he knows how to actually go about doing it.
Other than the donations toward helping Ukraine, I’m not sure any significant charity on the linked page will have really noticeable effects within a year or two. For what I’m talking about, there needs to be an obvious difference made quickly. It also doesn’t help that those are all pre-existing charities under other people’s names, which makes it hard to say for sure that it was SBF’s work that made the crucial difference, even if one of them does significantly impact the world in the short term.
If it were just me (and maybe a few other like-minded people) in the universe, however, and if I were reasonably certain it would actually do what it said on the label, then I might very well press it. What about you, for the version I presented for your philosophy?
Excellent question! I wouldn’t, but only because of epistemic humility—I would probably end up consulting with as many philosophers as possible and see how close we can come to a consensus decision regarding what to practically do with the button.
I’m not sure if you’re still actively monitoring this post, but the Wikipedia page on the Lead-crime hypothesis (https://en.wikipedia.org/wiki/Lead%E2%80%93crime_hypothesis) could badly use some infographics!! My favorite graph on the subject is this one (from https://news.sky.com/story/violent-crime-linked-to-levels-of-lead-in-air-10458451; I like it because it shows this isn’t just localized to one area), but I’m pretty sure it’s under copyright unfortunately.
Love this newsletter, thanks for making it :)
One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought, just think it’s interesting.
Thanks for the post—it was really amazing talking with you at the conference :)
We already know that we can create net positive lives for individuals
Do we know this? Thomas Ligotti would argue that even most well-off humans live in suffering, and it’s only through self-delusion that we think otherwise (not that I fully agree with him, but his case is surprisingly strong)
If you could push a button and all life in the universe would immediately, painlessly, and permanently halt, would you push it?
I think it’s okay to come off as a bit insulting in the name of better feedback, especially when you’re unlikely to be working with them long-term.
my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice
Has anyone actually tested this? It might be worthwhile to record your initial impressions of a set number of grants, then deliberately spend x amount of time researching them further, and calculate how often the further research makes you change your mind.
Ditto here :)
I would strongly support doing this—I have strong roots in the artistic world, and there are many extremely talented artists online that I think could potentially be of value to EA.
Fixing the ozone layer should provide a whole host of important insights here.
I’m really excited about this, and look forward to participating! Some questions—how will you determine which submissions count as “winners” vs. “runners-up” vs. “honorable mentions”? I’m unclear on what the criteria for differentiating the categories are. Also, are there any limits on how many submissions can make each category?