Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://www.centreforeffectivealtruism.org/careers
Ben_West
Thanks, that makes sense. I didn’t remember Going Infinite as having made such a strong claim, but maybe I was projecting my own knowledge into the book.
I looked back at the agenda for our resignation/buyout meeting and I don’t see anything like “didn’t disclose misplaced transfer money to investors”. Which doesn’t mean that no one had this concern, only that they didn’t add it to the agenda, but I do think it would be misleading to describe this as the central concern of the management team, given that we listed other things in the agenda instead of that.[1]
[1]
To preempt a question about what concerns I did have, if not the transfer thing: see my post from last year:
I thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had simply bought and held bitcoin.
I’m not sure if I would describe the above as a “benign management dispute” (it certainly didn’t feel benign to me at the time), but I think it’s even less accurate to describe it as being about the misplaced transfers.
I would be excited about a common application. My sense is that the only reason it doesn’t exist is that no one has put the time in to create it; when I’ve talked to hiring managers, most were in favor of the project (though there are some concerns, e.g. the fact that applications are currently a costly signal is helpful for identifying the applicants who actually really want to apply).
Thanks for organizing the conference, the statement, and the resulting media coverage! Cool to see big names like Chalmers on the list.
I do not remember being entirely or even primarily motivated by that issue. I’m not sure where Matt is getting this from, though in his defense he’s writing pretty flippantly.
Animal Justice Appreciation Note
Animal Justice et al. v. A.G. of Ontario (2024) was recently decided and struck down large portions of Ontario’s ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it.
Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!
Thanks! That’s helpful. In particular, I wasn’t tracking the 2021 versus 2022 thing.
predicting a 10% annual risk of FTX collapsing with FTX investors and the Future Fund (though not customers) losing all of their money,
Do you know if this person made any money off of this prediction? I know that shorting cryptocurrency is challenging, and maybe the annualized fee from taking the short side of a perpetual future would be larger than 10%, I’m not sure. But surely once the FTX balance sheet started circulating, the odds of a collapse on a short time scale should have risen enough for this trade to be profitable?[1]
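The reasoning above can be made concrete with a back-of-envelope expected-value calculation. All numbers below are illustrative assumptions, not real market data: the funding fee and loss figures are placeholders.

```python
# Back-of-envelope check: a short via a perpetual future pays an
# annualized funding fee, while the payoff is the probability-weighted
# price drop if the exchange collapses.

def expected_annual_return_of_short(p_collapse: float,
                                    loss_if_collapse: float,
                                    annual_funding_fee: float) -> float:
    """Expected one-year return of a 1-unit short position."""
    return p_collapse * loss_if_collapse - annual_funding_fee

# At a 10% annual collapse probability and a 100% price drop, the trade
# loses money if the funding fee exceeds 10%/year (here: assumed 12%).
print(expected_annual_return_of_short(0.10, 1.0, 0.12))

# But if new information (e.g. a leaked balance sheet) raises the
# near-term collapse probability to 50%, the same fee is easily covered.
print(expected_annual_return_of_short(0.50, 1.0, 0.12))
```

This matches the comment’s logic: the trade is unattractive at a flat 10% annual risk if the fee is above 10%, but becomes clearly profitable once the collapse probability concentrates into a short window.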
[1]
I feel like I asked you this before but I forgot the answer, sorry.
They perhaps shouldn’t be interviewed on popular EA podcasts like 80,000 Hours (as far as I can tell Moskovitz or Tuna have never been on)
I personally would be pretty interested to hear an interview with Moskovitz, Tuna, or Buterin and would feel sad if 80k felt prohibited from talking to them. I don’t remember being that excited about Buterin’s 2019 interview (I recall it mostly being about blockchain stuff which I wasn’t that interested in), so I guess that’s some sign that prohibiting interviews with him wouldn’t cost that much, but I’m interested to hear some of his answers to these questions.
I do expect on priors that there is a decent chance that Buterin will be revealed to have committed some type of serious misconduct, and if that does happen I wouldn’t be surprised to see a headline like “yet another EA billionaire is a criminal.” A blanket prohibition on inviting him to the 80k podcast feels like throwing the baby out with the bathwater though.
A thing that would update me here is evidence that engagement with a community/set of ideas by billionaires is negative in expectation. My sense is that EA’s involvement with SBF was toward the tail of the distribution of how badly engagement with billionaires goes, but I could be wrong about that, and if it is closer to the median case then a blanket prohibition feels more warranted.
Thanks! I appreciate the concrete suggestions.
Thanks for writing this! I would find it helpful if you taboo’d “EA should.” E.g. “Specific recommendation: don’t allow a billionaire to become a ‘face’ of EA”—what specifically should have been done differently?
E.g. my recollection from Going Infinite is that the billboards you criticize weren’t even endorsed by Sam; they were done by a marketing agency that was summarily fired after Sam realized what they had done. And, like you say, they didn’t contain the phrase “effective altruism” or anything else anyone could plausibly be said to have a trademark on. So what mechanism are you imagining which could have prevented them from going up?
I think there are steps which could be taken to limit people’s ability to identify as EAs. For example: CEA could exercise authoritarian control over the effective altruism trademark and sue anyone who self describes as an EA without jumping through whatever hoops we put in place. I think this is not a crazy idea, but it has clear downsides, and I’m not sure if this is actually what you are suggesting.
Yes, I responded to it here.
Marcus Daniell appreciation note
@Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It’s cool to see this, and I’m wishing him luck for his final year of professional play!
Thanks for writing this up! Dumb question: why can’t you just directly see if mantids have nociceptors? Are they hard to detect?
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by “not really knowing what worked and didn’t work in the FTX case” – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn’t rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, getting up to speed with it is time-consuming, and ~all of the practices are based on assumptions like “the risk manager has some amount of formal authority” which aren’t true in EA.
(And to be clear: I think these are very big blockers! They just aren’t resolved by doing an investigation.)
[1]
Or maybe more specifically: would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
[2]
Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other goals.
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, I’m interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.
Oh good point! That does seem to increase the urgency of this. I’d be interested to hear if CE/AIM had any thoughts on the subject.
Interesting! I’m glad I wrote this then.
Do you think “[doing an investigation is] one of the things that would have the most potential to give rise to something better here” because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?
the choice is like “should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?”
I’m not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as “clear evidence of fraud”; describing something as a “rumor” seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])
E.g. if I imagine a bank whose primary fraud detection mechanism was “hope the executives hear rumors of malfeasance,” I would not feel very satisfied with their risk management. If fraud did occur, I wouldn’t expect their primary process improvement to be “see if the executives could have updated from rumors better.” I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]
To be clear: I assume that the rumor mill could function more efficiently, and that there’s probably someone who heard “SBF is often overconfident” or whatever and could have updated from that information more accurately than they did. (If you’re interested in my experience, you can read my comments here.) I’m just very skeptical that a new and improved rumor mill is substantial protection against fraud, and don’t understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I will still likely be skeptical of their efficacy in the future.
Relatedly, I’ve heard people suggest that 80k shouldn’t have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, having a rumor mill does not do very much to lower that rate, and so I expect to believe that the risk will be relatively high for high net worth people 80k puts on the front page in the future, and I don’t need an investigation to tell me that.
To make some positive suggestions about things I could imagine learning from/finding useful:
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
At least, it seems like this should be our first port of call. Maybe we can’t actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
My understanding is that a bunch of work has gone into making regulations so that publicly traded companies are less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to not have to deal with them. I suspect that EA might find itself in a similarly unfortunate situation where reducing risks from “prominent individuals” requires the individuals in question to do something so onerous that no one is willing to become “prominent.” I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. it’s been on my backlog for a while to write up a summary of Why They Do It, or a fraud management textbook.
Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits don’t correlate well with propensity to commit white-collar crimes, and I think this may be a crux between me and people who disagree with me.
All that being said, I think I’m weakly in favor of someone more famous than me[5] doing some sort of write up about what rumors they heard, largely because I don’t expect the above to convince many people, and I think such a write up will mostly result in people realizing that the rumors were not very motivating.
[1]
Thanks to Chana Messinger for this point
[2]
One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so I’m not going to address it here.
[3]
e.g. I appreciate Nate writing this, but if in the future I learned that a certain person has spoken to Nate, I’m not going to update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment)
[4]
Part of why I haven’t prioritized this is that there aren’t a lot of earning to give companies anymore, but I think it’s still potentially worth someone spending time on this
[5]
I have done my own version of this, but my sense is that people (very reasonably) would prefer to hear from someone like Will
Thanks for doing this! You say “My own impression (quite low-confidence!) is that spending on EA focus areas like technologies such as far-UVC, synthesis screening, and GCBR-specific concerns is likely dominated by EA” and I’m trying to figure out precisely how dominant EA is.
You say “Therefore, I would guess it is highly unlikely that philanthropic spending on technologies such as far-UVC, preventing bioterrorism, synthesis screening, and regulating dual-use research of concern represent more than 5% of the total biosecurity spend.” And also EA funding is ~4% of total biosecurity spend. Can we conclude from this that EA is likely >80% of GCBR-specific funding?
First in-ovo sexing in the US
Egg Innovations announced that they are “on track to adopt the technology in early 2025.” Approximately 300 million male chicks are ground up alive in the US each year (since only female chicks are valuable) and in-ovo sexing would prevent this.
UEP originally promised to eliminate male chick culling by 2020; needless to say, they didn’t keep that commitment. But better late than never!
Congrats to everyone working on this, including @Robert—Innovate Animal Ag, who founded an organization devoted to pushing this technology.[1]
Egg Innovations says they can’t disclose details about who they are working with for NDA reasons; if anyone has more information about who deserves credit for this, please comment!