It’s fair enough to feel betrayed in this situation, and to speak that out.
But given your position in the EA community, I think it’s much more important to put effort towards giving context on your role in this saga.
Some jumping-off points:
Did you consider yourself to be in a mentor / mentee relationship with SBF prior to the founding of FTX? What was the depth and cadence of that relationship?
e.g. from this Sequoia profile (archived as they recently pulled it from their site):
“The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.
… And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth. SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.””
What diligence did you / your team do on FTX before agreeing to join the Future Fund as an advisor?
[Edited to add: Were you aware of the 2018 dispute at Alameda re: SBF’s leadership? If so, how did this context factor into your decision to join the Future Fund?]
Did you have visibility into where money earmarked for Future Fund grants was being held?
Did you understand the mechanism by which FTX claimed to be generating revenue? Were the revenues they reported sanity-checked against a back-of-the-envelope estimate of how much their claimed mechanism would be able to generate?
What were your responsibilities at the Future Fund? How often were you in contact with SBF and other members of FTX leadership in your role as an advisor?
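On the revenue question above: here is a rough sketch of the kind of back-of-the-envelope check I have in mind. Every number below is an invented placeholder, not a real FTX figure.

```python
# Toy sanity check: can the claimed mechanism (exchange trading fees)
# plausibly account for the reported revenue? All numbers are invented.

reported_annual_revenue = 1.0e9   # placeholder: revenue the firm reports, USD
claimed_daily_volume = 10.0e9     # placeholder: average daily trading volume, USD
blended_fee_rate = 0.0004         # placeholder: ~0.04% average fee per trade

implied_annual_revenue = claimed_daily_volume * blended_fee_rate * 365

ratio = reported_annual_revenue / implied_annual_revenue
print(f"implied fee revenue: ${implied_annual_revenue:,.0f}/year")
print(f"reported / implied:  {ratio:.2f}")

# If the ratio came out far above 1, the stated mechanism couldn't plausibly
# generate the reported revenue, and the numbers would deserve harder scrutiny.
```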
[Edit after months: While I still believe these are valid questions, I now think I was too hostile, overconfident, and not genuinely curious enough.]
One additional thing I’d be curious about:
You played the role of a messenger between SBF and Elon Musk in a bid for SBF to invest up to $15 billion of (presumably mostly his own) wealth in an acquisition of Twitter. The stated reason for that bid was to make Twitter better for the world. This has worried me a lot over the last weeks. It could have easily been the most consequential thing EAs have ever done, and there has, to my knowledge, never been a thorough EA debate that signalled that this would be a good idea.
What was the reasoning behind the decision to support SBF by connecting him to Musk?
How many people from FTXFF or EA at large were consulted to figure out if that was a good idea?
Do you think it still made sense, at the point you helped with the potential acquisition, to regard most of SBF’s wealth as EA resources? If not, why did you not inform the EA community?
Source for the claim about playing messenger: https://twitter.com/tier10k/status/1575603591431102464?s=20&t=lYY65-TpZuifcbQ2j2EQ5w
It could have easily been the most consequential thing EAs have ever done, and there has, to my knowledge, never been a thorough EA debate that signalled that this would be a good idea.
I don’t think EAs should necessarily require a community-wide debate before making major decisions, including investment decisions; sometimes decisions should be made fast, and often decisions don’t benefit a ton from “the whole community weighs in” over “twenty smart advisors weighed in”.
But regardless, seems interesting and useful for EAs to debate this topic so we can form more models of this part of the strategy space—maybe we should be doing more to positively affect the world’s public fora. And I’d personally love to know more about Will’s reasoning re Twitter.
often decisions don’t benefit a ton from “the whole community weighs in” over “twenty smart advisors weighed in”.
I don’t think this is true? Especially for decisions in the billions of dollars? Why do you think 20 smart advisors can spot all the problems that thousands of community members will, or even the major ones?
See Holden Karnofsky’s Some Thoughts on Public Discourse:
For nearly a decade now, we’ve been putting a huge amount of work into putting the details of our reasoning out in public, and yet I am hard-pressed to think of cases (especially in more recent years) where a public comment from an unexpected source raised novel important considerations, leading to a change in views. This isn’t because nobody has raised novel important considerations, and it certainly isn’t because we haven’t changed our views. Rather, it seems to be the case that we get a large amount of valuable and important criticism from a relatively small number of highly engaged, highly informed people. Such people tend to spend a lot of time reading, thinking and writing about relevant topics, to follow our work closely, and to have a great deal of context. They also tend to be people who form relationships of some sort with us beyond public discourse.
The feedback and questions we get from outside of this set of people are often reasonable but familiar, seemingly unreasonable, or difficult for us to make sense of. [...]
Regardless of the underlying reasons, we have put a lot of effort over a long period of time into public discourse, and have reaped very little of this particular kind of benefit (though we have reaped other benefits—more below). I’m aware that this claim may strike some as unlikely and/or disappointing, but it is my lived experience, and I think at this point it would be hard to argue that it is simply explained by a lack of effort or interest in public discourse.
Did you understand the mechanism by which FTX claimed to be generating revenue? Were the revenues they reported sanity-checked against a back-of-the-envelope estimate of how much their claimed mechanism would be able to generate?
I think it’s important to note that many experts, traders, and investors did not see this coming, or they could have saved/made billions.
It seems very unfair to ask fund recipients to significantly outperform the market and most experts, while having access to way less information.
See this Twitter thread from Yudkowsky.
Edit: I meant to refer to fund advisors, not (just) fund recipients.
Lorenzo, I agree the expert traders and investors have more technical skills in investing. But it seems to me that MacAskill and the FTX Future Fund board had more direct information about the personality of SBF, the personal connections among the leaders, and the group dynamics. So, when it comes to your statement “having access to way less information”, I don’t think this is the case.
I’m not as sure about advisors, as I wrote here. Agree on recipients.
I’ll go with you part of the way, but I also think that experts, traders, and even investors were further from SBF than at least some of the people in the equation here, which seems more and more true the more accounts I hear about people from Alameda saying they warned top brass. I wouldn’t expect an investor to have that kind of insight.
I think it’s good practice to try to understand a project’s business model and try to independently verify the implications of that model before joining the project.
My understanding is that FTX’s business model fairly straightforwardly made sense? It was an exchange, and there are many exchanges in the world that are successful and probably not fraudulent businesses (even in crypto—Binance, Coinbase, etc). As far as I can tell, the fraud stemmed from covering for specific failures at Alameda caused by bad decisions, but wasn’t inherent to FTX making any money at all?
I’m gonna wait it out on this one. I’d currently wildly guess that Coinbase is not a fraud.
I agree that it is less likely than for Binance, based on the fact that public stock market companies are required to be more transparent[1], though I do not know much about these particular companies.
[1] Of course, Enron, Wirecard, and others show that being listed on the stock market is no guarantee.
Yeah, fair. I have no real knowledge of crypto, so am not particularly endorsing Binance.
This seems to be “not even wrong”—FTX’s business model isn’t and never was in question. The issue is Sam committing fraud and misappropriating customer funds, and there being a total lack of internal controls at FTX that made this possible.
If you say that your business model is to hold depositor funds 1:1 and earn money from fees, but in fact you sometimes earn money via making trades with depositor funds, then you would be misrepresenting your business model.
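To make that concrete, here is a minimal sketch of the invariant such a business model claims to maintain (all figures invented):

```python
# The "held 1:1" claim, as an invariant. All figures are invented placeholders.

customer_liabilities = 5.0e9  # what the exchange owes its depositors, USD
segregated_assets = 5.0e9     # what it actually holds on their behalf, USD

def held_one_to_one(assets: float, liabilities: float) -> bool:
    """The stated model: customer funds fully backed at all times."""
    return assets >= liabilities

print(held_one_to_one(segregated_assets, customer_liabilities))  # True

# Quietly routing deposits into proprietary trades breaks the invariant,
# even if the trades happen to be profitable; the stated model no longer
# describes the business.
segregated_assets -= 2.0e9  # e.g. funds lent to an affiliated trading firm
print(held_one_to_one(segregated_assets, customer_liabilities))  # False
```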
Sure, and what is your point?
My current best guess is that WM quite reasonably understood FTX to be a crypto exchange with a legitimate business model earning money from fees—just like the rest of the world also thought. The fact that FTX was making trades with depositor funds was very likely a closely kept secret that no one at FTX was likely to disclose to an outsider. Why the hell would they—it’s pretty shady business!
Are you saying WM should have demanded to see proof that FTX’s money was being earned legitimately, even if he didn’t have any reason to believe it might not be? This seems to me like hindsight bias. To give an analogy—have you ever asked an employer of yours for proof that their activities aren’t fraudulent?
Not disagreeing with your overall point, but if my non-EA aligned, low-level crypto trader friend is any indication, then there certainly was reason to believe that SBF was at the very least doing some shady things. In August, I asked this friend for his thoughts on SBF, and this is what he replied:
“He’s obviously super smart but comes across a bit evil while trying to portray the good guy front. His exchange is notorious for liquidating user positions, listing shit coins thats prices trend to zero. He also founded Alameda research (trading / market maker firm) alongside FTX (the exchange). Alameda are one of the biggest crypto trading firms with predatory reputation. There’s also the issue of barely any divide between the exchange and the trading firm so alameda likely sees a lot of exchange data that gives them an edge trading on FTX vs other users.”
The irony is that this friend lost most of his savings because he was an FTX user.
Indeed, even by 2019, anyone who took a cursory look at Alameda’s materials would have known that they were engaged in Ponzi schemes or something equally fraudulent.
As you can see in that Twitter thread, Alameda was promising a guaranteed 15% rate of return on investments, with “no downside.” This is impossible. Only the likes of Bernie Madoff would pretend to have a risk-free return at that level. And the materials look so amateurish (with a number of grammatical errors) that the person posting on Twitter originally thought it was “so egregious my first thought was that it was fake.”
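To spell out why “guaranteed, no downside” at that level is a red flag, here is a toy comparison; the rates are rough placeholders, not period-accurate figures.

```python
# Why a *guaranteed* 15% is a red flag, in toy numbers (placeholder rates).

risk_free_rate = 0.02  # placeholder for a then-prevailing risk-free rate
claimed_rate = 0.15    # Alameda's advertised "no downside" return

principal = 1_000_000.0
years = 10
for label, rate in [("risk-free", risk_free_rate), ("claimed", claimed_rate)]:
    final = principal * (1 + rate) ** years
    print(f"{label:>9}: ${final:,.0f} after {years} years")

# A truly riskless 13-point spread over the risk-free rate would be a pure
# arbitrage: borrow near 2%, invest at 15%, repeat without limit. Competitive
# markets do not leave that on the table, which is why a guarantee at that
# level signals either misunderstanding or misrepresentation of the risk.
```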
This was fairly close to the time of Alameda’s founding. As of 2019, how much were CEA folks (including Will and Tara) involved with Alameda’s obvious fraud?
Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others split ways with Sam in early 2018. I’ll leave it to them to share more if/when they want to, but I think it’s fair to say they left at least in part due to concerns about Sam’s business ethics. She’s had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam’s actions are used to tarnish Tara.
Strong agree, but in that case, it seems very unlikely that Will was unaware of these serious “concerns about Sam’s business ethics” back in 2018, and it seems all the more incumbent on him to offer an explanation as to why he kept such a close affiliation with SBF thereafter.
The returns shown in the document are not indicative of fraud—those sorts of returns are very possible when skilled traders deploy short-term trading strategies in inefficient markets, which crypto markets surely were at the time. The default risk when borrowing at 15% might have been very low, but not zero as they suggested. The “no downside” characterization should have been caught by a lawyer, and was misleading.
Nobody with an understanding of trading would have [EDIT: I would not have] concluded they were engaged in Ponzi schemes or were misrepresenting their returns based on the document. There are plenty of sloppy, overoptimistic startup pitch decks out there, but most of the authors of those decks are not future Theranoses.
Good points, Brian... I’m sure there are lots of overoptimistic pitch decks, and that a 15% return might be feasible, and maybe I’m just looking at this with the benefit of hindsight.
Even so, an investment firm normally doesn’t do anything like this, right? I mean, I assume that even Renaissance Technologies wouldn’t want to offer one single investment opportunity packaged as a loan with a legally guaranteed 15% rate of return with “no downside.” https://www.bloomberg.com/news/articles/2021-02-10/simons-makes-billions-while-renaissance-investors-fume-at-losses#xj4y7vzkg They might brag about their past returns, but would include lots of verbiage about the risks, and about how past performance is no guarantee of future returns, etc.
Thanks for your reply. I do think it would be unusual to see such promises, particularly from a firm looking for large investments. And I would expect to see a bunch of disclaimers, as you suggest. There might have been such language in the actual investment documents, but still. The excerpt shared on Twitter would have set off red flags for me because it seems sloppy and unprofessional, and it would have made me particularly concerned about their risk management, but I wouldn’t have concluded it was a Ponzi scheme or that there was something fraudulent going on with the reported returns.
It will be interesting to see if all of the FTX/Alameda fraud (if there was fraud, which seems very likely) took place after the most recent investment round. Investors may have failed not in financial diligence but in ensuring appropriate governance and controls (and, apparently, in assessing the character of FTX’s leadership).
One specific question I would want to raise is whether EA leaders involved with FTX were aware of or raised concerns about non-disclosed conflicts of interest between Alameda Research and FTX.
For example, I strongly suspect that EAs tied to FTX knew that SBF and Caroline (CEO of Alameda Research) were romantically involved (I strongly suspect this because I have personally heard Caroline talk about her romantic involvement with SBF in private conversations with several FTX fellows). Given the pre-existing concerns about the conflicts of interest between Alameda Research and FTX (see examples such as these), if this relationship were known to be hidden from investors and other stakeholders, should this not have raised red flags?
I believe that, even in the face of this particular disaster, who EAs are fucking is none of EA’s business. There are very limited exceptions to this rule like “maybe don’t fuck your direct report” or “if you’re recommending somebody for a grant, whom you have fucked, you ought to disclose this fact to the grantor” or “Notice when somebody in a position of power seems to be leaving behind a long trail of unhappy people they’ve fucked”, plus of course everything that shades over into harassment, assault, and exploitation—none of which are being suggested here.
Outside of that, there’s a heck of a lot of people in this world fucking a heck of a lot of other people; most people who are fucking don’t blow up depository institutions; and controls and diligence on depository institutions should achieve reliability by some avenue other than checking which people are fucking. And I consider it generally harmful for a community to think that it has a right to pass judgment on fucking that is not like really clearly violating deontology. That’s not something that community members owe to a community.
Alameda’s position as a major market maker on FTX, profiting on the spread between buying and selling prices, puts it in a position to have a potential conflict of interest with FTX, which gets its revenue from transaction fees and margin loans to traders.
And while the firm’s executives, and Bankman-Fried, say there is a strong firewall between the two — something that, Bloomberg correctly notes, no one has actually said or even suggested has been breached — concerns about the potential for such conflicts of interest are growing as the size and activity of Alameda gains more attention.
Are you trying to suggest that when two firms need to be at arms-length because of the potential for an enormous conflict of interest, it wouldn’t matter if the two firms’ chief executives were dating each other?
I’m saying that if your clearance process is unable to tell whether or not two firms are arms-length, when they have a great deal to potentially gain from illegally interoperating, without the further piece of info about whether the CEOs are dating, you’re screwed. This is like trying to fix the liar loan problem during the mortgage meltdown by asking whether the loan issuer is dating the loan recipient. The problem is not that, besides the profit motive, two people might also be fond of each other and that’s terrible; the problem is if your screening process isn’t enough to counterbalance the profit motive. A screening process that can make sure two firms aren’t colluding to illegally profit should not then break down if the CEOs go on a date.
Or to put it more compactly and specifically: Given the potential energy between Alameda and FTX as firms, not to mention their other visible degrees of prior entanglement, you’d have to be nuts to rely on an assurance process that made a big deal about whether or not the CEOs were dating.
Maybe even more compactly: Any time two firms could gain a lot of financial free energy by colluding, just pretend you’ve been told their CEOs are dating, okay, and now ask what assurances or tests you want to run past that point.
...I think there must be some basic element of my security mindset that isn’t being shared with voters here (if they’re not just a voting ring, a possibility that somebody else raised in comments), and I’m at somewhat of a loss for what it could be exactly. We’re definitely not operating in the same frame here; the things I’m saying here sure feel like obvious good practices from inside my frame.
Taking prurient interest in other people’s sex lives, trying to regulate them as you deem moral, is a classic easy-mode-to-fall-into of pontificating within your tribe, but it seems like an absurd pillar on which to rest the verification that two finance companies are not intermingling their interests. Being like “Oh gosh SBF and Caroline were dating, how improper” seems like finding this one distracting thing to jump on… which would super not be a key element of any correctly designed corporate assurance process about anything? You’d go audit their books and ask for proofs about crypto cold storage, not demand that somebody’s romance be a dark secret that nobody got to hear about?
We sure are working in different frames here, and I don’t understand the voters’ (if they’re not just a voting ring).
I work (indirectly) in financial risk management. Paying special attention to special categories of risk—like romantic relationships—is very fundamental to risk management. It is not that institutions are faced with a binary choice of ‘manage risk’ or ‘don’t manage risk’, where people in romantic relationships are ‘managed’ and everyone else is ‘not’. Risk management is a spectrum, and there are good reasons to think that people with both romantic and financial entanglements are higher risk than those with financial entanglements only. For example:
Romantic relationships inspire particularly strong feelings, which do not usually characterise financial relationships. People in romantic relationships will take risks on each other’s behalf that people in financial relationships will not. We should be equally worried about familial relationships, which also inspire very strong feelings.
Romantic relationships inspire different feelings from financial relationships. Whereas with a business partner you might be tempted to act badly to make money, with a romantic partner you might be tempted to act badly for many other reasons: for example, to make your partner feel good, or to spare them embarrassment.
Romantic relationships imply a different level of access than financial relationships. People in romantic relationships have levers to make their partner do things they might not want to do: for example, abusive relationships, threatening to end the relationship unless X is done, or watching the partner enter their password into their computer to gain access to systems.
So if I were writing these rules, I might very well rephrase it as “do you have a very strong friendship with this other person” and “do you occasionally spend time at each other’s houses” to avoid both allonormativity and the temptation to prurient sniffing; and I’d work hard to keep any disclosed information of that form private, like “don’t store in Internet-connected devices or preferably on computers at all” private, to minimize incentives against honest disclosure. And even then, I might expect that among the consequences of the regulation, would be that CEOs in relationships would occasionally just lie to me about it, now that such incentives had been established against disclosure.
When you optimize against visible correlates of possible malfeasance, you optimize first and above all against visibility; and maybe secondarily against possible malfeasance if the visibility is very reliable and the correlations are strong enough to take causal leaning on them.
But, sure, if you know all that and you understand the consequences, then Sequoia could’ve asked if SBF and Caroline were in a relationship, understanding that a No answer might be a lie given the incentives they’d established, and that a Yes answer indicated unusual honesty.
I don’t really understand why you are describing this as a hypothetical (“If I were writing these rules...”). You are the founder and head of a highly visible EA organisation receiving charitable money from donors, and presumably have some set of policies in place to prevent staff at that organisation from systematically defrauding those donors behind your back. You have written those policies (or delegated someone else to write them for you). You are sufficiently visible in the EA space that your views on financial probity materially affect the state of EA discourse. What you are telling me is that the policies which you wrote don’t include a ‘no undeclared sexual relationships with people who are supposed to act as a check on you defrauding MIRI’ rule, based on your view that it is excessively paternalistic to inquire about people’s sex lives when assessing risk, and that your view is that this is the position that should be adopted in EA spaces generally.
This is—to put it mildly—not the view of the vast majority of organisations which handle money at any significant scale. No sane risk management approach would equate a romantic relationship with ‘a very strong friendship’. Romantic love is qualitatively different to fraternal love. No sane risk management approach would equate “occasionally spend[ing] time at each other’s house” to living together. My wife is often alone in the house for extended periods of time, but I usually hang out with friends when they come over (to give just one difference from an enormous list of possibilities).
EA leadership—which includes you—has clearly made a catastrophic error of financial risk management with this situation. The extent to which they are ‘responsible’ is a fair debate, but it is unquestionable they failed to protect people who trusted them to steer EA into the future—hundreds of people have been made unemployed overnight and EA is potentially facing its first existential risk as a result. I am genuinely baffled how you can look at this situation and conclude that the approach you are describing—a very intelligent non-expert such as yourself creates their own standards of financial conduct at significant odds with the mainstream accepted approach—could still possibly be appropriate in the face of the magnitude of the error this thinking has led to.
I also think it is extremely unedifying that you make the case elsewhere that the disagreement votes you are receiving for your position are from vote manipulation. A more plausible explanation is that people have independently reached the conclusion that you are wrong that romantic love presents no special financial risks.
Somebody else in that thread was preemptively yelling “vote manipulation!” and “voting ring!”, and as much as it sounds recursively strange, this plus some voting patterns (early upvotes, then suddenly huge amounts of sudden downvoting) did lead me to suspect that the poster in question was running a bunch of fake accounts and voting with them.
We would in fact be concerned if it turned out that two people who were supposed to have independent eyes on the books were in a relationship and didn’t tell us! And we’d try to predictably conduct ourselves in such a mature, adult, understanding, and non-pearl-clutching fashion that it would be completely safe for those two people to tell the MIRI Board, “Hey, we’ve fallen in love, you need to take auditing responsibility off one of us and move it to somebody else” and have us respond to that in a completely routine, nonthreatening, and unexcited way that created no financial or reputational penalties for us being told about it.
That’s what I think is the healthy, beneficial, and actually useful for minimizing actual fraud in real life culture, of which I do think present EA has some, and which I think is being threatened by performative indignation.
I’m struggling to follow your argument here. What you describe as the situation at MIRI is basically standard risk management approach—if two people create a risk to MIRI’s financial security processes by falling in love, you make sure that neither signs off on risk taken by the other.
But in this thread you are responding with strong disagreement to a comment which says “if this relationship [between SBF and Caroline] were known to be hidden from investors and other stakeholders, should this not have raised red flags?”. You said “who EAs are fucking is none of EA’s business”, amongst other comments of a similar tone.
I don’t understand what exactly you disagree with if you agree SBF and Caroline should have disclosed their relationship so that proper steps could be taken to de-risk their interactions (as would happen at MIRI). It seems that you do agree it matters who EAs are fucking in contexts like this? And therefore that it is relevant to know whether Will MacAskill knew about the undisclosed relationship?
You could plausibly claim it gets disclosed to Sequoia Capital, if SC has shown themselves worthy of being trusted with information like that and responding to it in a sensible fashion eg with more thorough audits. Disclosing to FTX Future Fund seems like a much weirder case, unless FTX Future Fund is auditing FTX’s books well enough that they’d have any hope of detecting fraud—otherwise, what is FTXFF supposed to do with that information?
EA generally thinking that it has a right to know who its celebrity donors are fucking strikes me as incredibly unhealthy.
I think we might be straying from the main point a bit; nobody is proposing a general right to peer into EA sex lives, and I agree that would be unhealthy.
There are some relatively straightforward financial risk management principles which mainstream orgs have been successfully using for decades. You seem to believe one of the pillars of these principles—surfacing risk due to romantic entanglements between parties—shouldn’t apply to EA, and that instead some sort of ‘commonsense’ approach should prevail (inverted commas because I think the standard way is basically common sense too).
But I don’t understand where your confidence that you’re right here is coming from—EA leadership has just materially failed to protect EA membership from bad actor risk stemming at least in part from a hidden conflict of interest due to a romantic entanglement. EA leadership has been given an opportunity to run risk management their way, and the result is that EA is now associated with the biggest crypto fraud in history. Surely the Bayesian update here is that there are strong reasons to believe mainstream finance had it approximately right?
Rereading the above, I think I might just be unproductively repeating myself at this point, so I’ll duck out of the discussion. I appreciated the respectful back-and-forth, especially considering parts of what I was saying were (unavoidably) pretty close to personal attacks on you and the EA leadership more broadly. Hope you had a pleasant evening too!
My (possibly wrong) understanding of what Eliezer is saying:
FTX ought to have responded internally to the conflict of interest, but they had no obligation to disclose it externally (to Future Fund staff or wider EA community).
The failure in FTX was that they did not implement the right internal controls—not that the relationship was “hidden from investors and other stakeholders.”
If EA leadership and FTX investors made a mistake, it was failing to ensure that FTX had implemented the right internal controls—not failing to know about the relationship.
I couldn’t quite bottom out exactly what EY was saying, but I’m pretty sure it wasn’t that. On your interpretation, EY said, “who EAs are fucking is none of [wider] EA’s business [except people who are directly affected by the COI]”. But he goes on to clarify “There are very limited exceptions to this rule like ‘maybe don’t fuck your direct report’ ”. If that’s an exception to the rule of EAs fucking being only of interest to directly affected parties, then it means EY thinks an EA having sex with a subordinate should be broadcast to the entire community. That’s a very strict standard (although I guess not crazy—just odd that EY was presenting it as a more relaxed / less prurient standard than conventional financial risk management).
It also doesn’t address my core objection, which is that EA leadership failed very badly to implement proper financial risk management processes. Generally my point was that EA leadership should be epistemically humble now and just implement the risk management processes that work for banks, rather than tinkering around and introducing their own version of these systems. Regardless of what EY meant, unless he meant ‘We should hire in PwC to implement the same financial controls as every Fortune 500 company’, he is making exactly the same mistake EA leadership made with FTX—assuming that they could create better risk management from first principles than the mainstream system could from actual experience.
By the way, I disagree with the object-level position here too. Every FTX investor needed to know about the COI and the management strategy FTX adopted in order to assess their risk exposure. This would be the standard at a conventional company (if a conventional company knew about such a blatant COI involving their CEO and didn’t tell investors, then their risk officers would potentially be liable for the fraud too, iirc).
What’s disappointing is not that Eliezer can’t make even a minor acknowledgement of the relevance of the models or experiences of others, or that he is probably outright wrong on the substantive issues, but that Eliezer struggles to communicate and hold a thread in this conversation.
His counterpart is a literal domain expert and maybe a very valuable talent for EA. Considering the totality of the votes and writing, this person is being badgered under what must look to any outsider like scary or unclear norms and power structures of the EA community, on its own forum, while Eliezer’s de facto community keeps him afloat.
Eliezer’s behavior is unacceptable for a funded, junior community builder, much less a senior leader.
Imagine a newcomer witnessing this, much less experiencing this.
I also think it is extremely unedifying that you make the case elsewhere that the disagreement votes you are receiving for your position are from vote manipulation. A more plausible explanation is that people have independently reached the conclusion that you are wrong that romantic love presents no special financial risks.
I agree that it does not seem likely that there was manipulation here with the votes (I cast a strong disagreement-vote on multiple comments by Eliezer on this page). But concerns about potential voting manipulation on this forum are reasonable by default, considering that it’s an open platform on which it’s technically possible for someone to vote from multiple anonymous accounts.
Fair enough, as long as that’s the standard that is applied to all commenters and not just EA leadership. I appreciate that EY agreed there was likely no manipulation after I pointed this out.
It is extremely inaccurate to characterise the relationship between Bankman-Fried and Ellison as merely “dating”, and that people are merely saying that this was “improper”.
Bankman-Fried and Ellison were living together in a shared apartment. One article contains an unverified suggestion that Ellison may have been in some form of relationship with other residents of the apartment—residents who included the CTO and Director of Engineering of FTX.
If true, this is a genuinely alarming piece of information that would very obviously have caused anybody to question whether they should have placed their funds in the care of this specific group of people. However this information was not made public, and that lack of transparency is where the problem lies.
This statement is incredibly out of touch Eliezer. If CEO #1 and CEO #2 are in a romantic relationship, there is a clear conflict of interest here, especially when not disclosed to the public. In agreement with Anonymous, I also strongly oppose the language you’re using. I also agree with their comments regarding romantic relationships in the workplace. My general stance is 0 tolerance for workplace romance because it’s messy and there are far too many power dynamics at play.
Conflict of interest is the issue my friend. Unbiased decisions cannot happen when one has an other-than-work-relationship with the persons they are dealing with.
Any relationship that is, or appears to be, not in the best interest of the organization. A conflict of interest would prejudice an individual’s ability to perform his or her duties and responsibilities objectively.
IIA Standards Glossary of Terms, Conflicts of Interest.
...You think it’s important to disclose this conflict of interest when you recommend a grant to someone, but not important when you as a CEO decide on a multi-billion dollar loan to the company where the other person is the CEO?
Disclose to who? The loan was blatantly bad; nobody in a position to take that disclosure should’ve given two flying floops whether the CEOs were dating or not.
I believe that, even in the face of this particular disaster, who EAs are fucking is none of EA’s business. There are very limited exceptions to this rule like “maybe don’t fuck your direct report” or [...]
Is the word “maybe” here just a style of writing? Should the EA community tolerate some cases in which person A is having sex with a person B who is a direct report of A (in an EA org)?
I realize that inserting hedge words can allow one to publish things using much less time and energy (which can be a very good reason to insert hedge words).
Because as somebody who could potentially be mistaken for a Leader I want to be pretty derned careful about laying down laws to regulate other people’s sexuality; and while something like that would definitely be a red flag at, like, idk, CEA or MIRI, maybe it’s different if we’re talking about a 3-person startup. Maybe you’ll say it’s still ill-advised, but I don’t know their circumstances and there’s also a difference between ill-advised and Forbidden. I feel a lot more comfortable leaving out the ‘maybe’ when I pontificate my legislation about informing a donor that your recommended grantee is one with whom you’ve had a relationship—though even there, come to think, I’m relying on all the donors I ever talk to being sensible people who aren’t going to go “OH NO, PREMARITAL SEX” about it.
...I am confused and somewhat worried by the degree to which voters on this post seem to feel that it’s not an important heuristic to try to construct your community regulatory process in a way that doesn’t revolve around people’s sex lives, except in so far as the sex itself is per se a bad thing (eg nonconsensual or under conditions where positive consent could not reasonably be determined).
It’s not about the sex in and of itself, it’s about the conflict of interest and favouritism. Romantic love interest is enough for that too. EA could probably learn a lot from how mainstream orgs deal with this.
Yes—I almost can’t believe I am reading a senior EA figure suggesting that every major financial institution has an unreasonably prurient interest in the sex lives of their risk-holding employees. EA has just taken a bath because it was worse at financial risk assessment than it thought it was. The response here seems to be to double-down on the view that a sufficiently intelligent rationalist can derive—from first principles—better risk management than the lessons embedded in professional organisations. We have ample evidence that this approach did not work in the case of FTX funding, and that real people are really suffering because EA leaders made the wrong call here.
Now is the time to eat a big plate of epistemically humble crow, and accept that this approach failed horribly. Conspiracy theorising about ‘voting rings’ is a pretty terrible look.
I feel like people are mischaracterizing what Eliezer is saying. It sounds to me like he’s saying the following.
“Sure, the fact that the two were dating or having sex makes it even more obvious that something was amiss, but the real problem was obviously that Alameda and FTX were entangled from the very start, with Sam having had total control of Alameda before he started FTX, and there were no checks and balances and so on, so why are you weirdos focusing on the sex part so much and ignoring all the other blatant issues?!”
That seems like a very charitable reading of the comment
“who EAs are fucking is none of EA’s business. There are very limited exceptions to this rule like … none of which are being suggested here.”
I’d suggest that given the high stakes of the situation at the moment, it is especially important not to inadvertently give the impression that EA leadership think they have privileged insight into financial risk management that they actually don’t. If EY has merely mangled his argument (as you suggest), it would be very sensible for him to edit his comment to reflect that, and apologise for implying that vote rigging was the only reason he could have been downvoted.
That seems like a very charitable reading of the comment
I was commenting on his overall stance from his comments throughout the threads here, not only on that particular first comment. I agree that the part you cite doesn’t sound defensible. I considered his further comments to be admissions of “Okay, you all have a point, but …” (If I’m right with my interpretation, he could’ve been more clear about the part of “sorry, you all have a point and the initial comment was too crude.”)
Edit: FWIW, I thought the info/arguments you gave about why it’s common practice in finance to carefully monitor romantic or sexual conflicts of interests were compelling, and your point about how EAs maybe shouldn’t think they can do better based on first-principles reasoning also seems wise.
I’m under the impression that mainstream orgs deal with this rather poorly, by having the relationships still happen, but be Big Dark Forbidden Secrets instead of things that people are allowed to actually know about and take into account. But they Pretend to be acting with Great Propriety which is all that matters for the great kayfabe performance in front of those who’d perform pearl-clutching otherwise. People falsifying their romantic relationships to conform to ideas about required public image is part of our present culture of everything being fake; so what loves you forbid from being known and spoken of, by way of trying to forbid the loves themselves, you should forbid very hesitantly.
I think our current culture is better, even in light of current events, because I don’t think the standard culture would have actually prevented this bad outcome (unless almost any minor causal perturbance would’ve prevented it). It would mean that SBF/C’s relationship was just coming out now even though they’d previously supposedly properly broken up before setting up the two companies, or something—if we learned even that much!
One thing that people in mainstream orgs do, if they want to act with integrity, is resign from roles/go work somewhere else when they want to start a relationship that would create a conflict of interest whilst both are in their current positions (or if they value their job(s) more, give up on the idea of the relationship).
The first half of this comment is tangential at best. It’s also a bit odd to me how much you are defending “our current culture” when someone posted on the forum just yesterday expressing concerns about said culture.
...by not saying anything in favor of protecting some aspect of our current culture, when somebody else has just recently expressed concerns about it? That’s a rule?
First of all, this language is wildly unhelpful, even outside of the current situation.
Secondly, this isn’t close to true, and shows ignorance and blasé disregard for a wide range of social and power structures in the real world, such as corporate environments. This is not some leftist social justice statement. Even before the #metoo era, there’s a vast range of sexual conduct that wouldn’t be acceptable for a middle/senior leader.
Finally, you, Eliezer Yudkowsky, are a major part of the problem, with EA, as you put it. Your intellectual contributions are poorly regarded in the real world and across EA. This is not a “sneerclub” view, but by open minded outsiders, and senior EAs across all cause areas. Even in a full AI safety worldview, your recent views/contributions on AI safety have not been positive, and would have been an issue to work around, even without the changes in longtermist funding.
As a heterosexual male who has interacted with many female EAs, I almost never consider any sort of romantic or sexual relationship with any female EA, and absolutely not a junior one. This is partially a consequence of past events involving the misconduct of others, of which I am entirely innocent. Many other male EAs have similar personal policies, for the same reasons.
As the public content above shows, it is not salacious to say that even this basic idea above is ignored by others. I have plausible reasons to believe this is costly, and I resent the cost the movement bears, carrying on like this, due to this kind of behavior.
Note that this account is anonymous, but not because of a desire to communicate or criticize without retaliation, and my identity will be clear.
Morale is low right now, and senior EA figures are occupied; some have come under direct criticism, whether justified or not. In this environment, it’s difficult to communicate or express leadership. Only the CEA community health team seems to be taking the initiative, which must be very difficult, and this is heroic.
In situations like this, there is often gardening of the online space, which tends to be performed by marginal actors. LW and MIRI have been left mostly unscathed by the FTX disaster, and now Eliezer and Rob B (a professional communicator employed by MIRI) are highly active. In the sense of advancing their cause, that’s OK and natural. They are also helpful in tempering gardening attempts by other actors.
Note that I don’t think Eliezer’s representations, such as downplaying his interactions with FTX (he might have been a regrantor and was probably more actively jockeying/seeking money than it seems, which is understandable) and other statements, are entirely truthful or disinterested.
More importantly, the low opinion of Eliezer’s contributions is well known, relevant and should be communicated. (The quality of his output and MIRI was considered low, which is why they received relatively little funding and were unscathed[1]).
The fact they were not funded by a bad person is not a sign of virtue or quality, to say this in “LW speak”, see Reversed Stupidity Is Not Intelligence.
As of writing, I want to point out that Eliezer’s comments, which are probably a strained digression to explain his original comment (and include speculation of a voting ring against him ???), have the following vote score.
My comments below this have:
This is not only not justifiable by content, this is literally suppressing criticism of a promoted EA figure on the EA forum.
Note that my comment is not policing or calling out actual private interpersonal conduct, and it seems well justified given the parent comment, as well as the wide range of topics discussed. A week ago, I think we all know, I would have been voted down for factual but off-color statements about SBF’s business practices.
Now, here, I make the additional, specific accusation that the existence of Eliezer Yudkowsky as a major public figure in EA is out of proportion to his contributions or his popularity in EA, and is partially supported by de facto organized coordination by a group of people on the LessWrong and Effective Altruism forums.
That is, my comment has been shared on, say, private FB groups, Slack, or Discord servers, with the expected aim of managing this content.
I encourage examination of the view/vote graph and the origin of view/votes for my comment and possibly other content.
I am not attacking the cultures, views to help the world or ways of interpersonal relationships of people close to Eliezer. I do not want Eliezer to be harmed, or reduce his agency to contribute in the ways he wants to.
Eliezer is popular here. He founded LessWrong, MIRI and the AGI x-risk community. It’s not surprising you are getting downvoted for criticising his work (note I have not downvoted you, just explaining here).
Not just for criticism of his work but also for bringing this up in a totally unrelated context. If you’re (I mean the anonymous commenter) bothered by the way Eliezer dismisses concerns around “sex within orgs or close networks makes things messy and often ends badly,” I think that’s fair enough and I wouldn’t have downvoted your comment for it. But then adding that you think his intellectual contributions are also shit (or at least are seen as bad by people outside the movement) – that just seems a bit mean-spirited (besides IMO being wrong).
… I feel sad and uncomfortable about the commenters here criticizing Anonymous for “personally attacking” Eliezer, “bringing this up in a totally unrelated context”, being “mean-spirited”, etc.
It surely matters whether or not the intellectual contributions of someone in Eliezer’s reference class are bad, and in a world where they are bad, I care a lot more about learning that fact than about exactly which thread or subthread the discussion occurs on.
I’m glad you mention “besides IMO being wrong” at all. But where’s the objection that no supporting argument has been given? Where are the requests for specifics, so that it’s even possible to evaluate Anon’s claim by comparing notes about whether a given idea is a good intellectual contribution?
The problem with “More importantly, the low opinion of Eliezer’s contributions is well known” isn’t that it’s rude or off-topic; it’s that it’s maximally vague, more like a schoolyard taunt (“Oh, everyone knows X is lame, it’s so obvious I don’t even need to say why!”) than like a normal critique of someone’s intellectual output. If you think Eliezer’s wrong about tons of stuff, give some examples so those can be talked about, for goodness’ sake.
I agree that maximal vagueness is the much bigger issue with the intellectual criticism part of the comment than its unrelatedness and should also have said so. (And also via that vagueness implying that there’s a consensus where there IMO isn’t.)
I have, as it happens, a low opinion of Eliezer’s influence on EA (though I admit I’ve hardly read his stuff), but I still downvoted a generalized off-topic nasty personal attack.
Is the romantic relationship that big a deal? They were known to be friends and colleagues + both were known to be involved with EA and FTX future fund, and I thought it was basically common knowledge that Alameda was deeply connected with FTX as you show with those links—it just seems kind of obvious with FTX being composed of former Alameda employees and them sharing an office space or something like that.
My $0.02 - (almost) the entire financial and crypto world, including many prominent VC funds that invested in FTX directly, seems to have been blindsided by the FTX blowup. So I’m less concerned about the ability to foresee that. However, the 2018 dispute at Alameda seems like good reason to be skeptical, and I’m very curious what was known by prominent EA figures, what steps they took to make it right, and whether SBF was confronted about it before prominent people joined the Future Fund etc.
+1, I think people are applying too much hindsight here.* The main counter consideration: To the degree that EAs had info that VCs didn’t have, it should’ve made us do better.
*It’s still important to analyze what went wrong and learn from it.
I come from the traditional accounting/internal audit world, where governance teams and internal controls are at the very least installed, and due diligence is a best practice, especially when large sums of money are being distributed. I am new here to the EA community and had expected similar protocols to be in place, as large-scale fraud is not some new thing—it brought down the accounting profession in 2001 (Enron) and played a part in the mortgage crisis in 2008 (Lehman).
I guess what is clear to me is that EA lacks expertise in fraud/error detection, and moreover has to make some improvements in the near future.
Sorry to be jumping in having never posted here, but I’ve been following along for a while and I’m fascinated.
Everything mentioned in the Sequoia piece about MacAskill’s involvement is strange. I’d be interested in hearing more about what his “pitch” was like back then:
“It was his fellow Thetans who introduced SBF to EA and then to MacAskill, who was, at that point, still virtually unknown. MacAskill was visiting MIT in search of volunteers willing to sign on to his earn-to-give program.”
What was with this recruitment process? The notion of “signing on” jumps out at me. Also MacAskill is the one who told SBF to go to Jane Street?
“His course established, MacAskill gave SBF one last navigational nudge to set him on his way, suggesting that SBF get an internship at Jane Street that summer.”
That’s so weird. I get the idea of trying to spread the gospel, but has MacAskill ever spoken about his motives for...going around meeting college kids to...I don’t even know what the correct description would be. Gain acolytes?
I think his official public motive is obvious: he’s always tried to get people to do things he thinks have positive altruistic impact (for example, by writing books advocating they do stuff), so he was doing the same with potentially influential people at a more 1-1 level. I don’t think this is something that’s ever been hidden! I can see why you might reasonably think this sort of influence-seeking feels a bit off, since on some level it is an attempt to exercise power in a way that bypasses democracy. I’m sure someone has criticized it on those grounds. But organizations recruiting talented college students is quite normal in itself, even if they don’t usually have to sign on to a detailed philosophy. And even the latter is hardly unique: think of someone trying to network informally for people to get involved with their new libertarian think tank, or socialist magazine.
Oh, yeah, I totally agree. I don’t think of it as a way to bypass democracy or exercise undue influence. The main thing for me is that SBF and MacAskill are so interconnected. I thought it was primarily a philosophical connection, but the financial connection seems just as important, especially since MacAskill has been involved in every single part of SBF’s career. The first job at Jane Street, the arbitrage, the founding of Alameda, and now all the FTX crap.
Outside recent political donations, it seems that SBF was shoveling most of his donation money back into MacAskill’s organizations. (Someone else linked to his old blog, which gives a glimpse of this: http://measuringshadowsblog.blogspot.com/)
Now that SBF’s biggest endeavor has turned out to be a giant scam, it’s important to understand what MacAskill knew about everything and whether any of the same kind of financial misconduct is going on at any of the charitable organizations. I’m sure we’ll know a lot more soon, though.
I don’t think this is true? Especially for decisions in the billions of dollars? Why do you think 20 smart advisors can spot all the problems that thousands of community members will, or even the major ones?
See Holden Karnofsky’s Some Thoughts on Public Discourse:
I think it’s important to note that many experts, traders, and investors did not see this coming, or they could have saved/made billions.
It seems very unfair to ask fund recipients to significantly outperform the market and most experts, while having access to way less information.
See this Twitter thread from Yudkowsky
Edit: I meant to refer to fund advisors, not (just) fund recipients
Lorenzo, I agree that expert traders and investors have more technical skill in investing. But it seems to me that MacAskill and the FTX Future Fund board had more direct information about SBF’s personality, the personal connections among the leaders, and the group dynamics. So when it comes to your statement “having access to way less information”, I don’t think that is the case.
I’m not as sure about advisors, as I wrote here. Agree on recipients
I’ll go with you part of the way, but I also think that experts, traders, and even investors were further from SBF than at least some of the people in the equation here, which seems more and more true the more accounts I hear about people from Alameda saying they warned top brass. I wouldn’t expect an investor to have that kind of insight.
I think it’s good practice to try to understand a project’s business model and try to independently verify the implications of that model before joining the project.
My understanding is that FTX’s business model fairly straightforwardly made sense? It was an exchange, and there are many exchanges in the world that are successful and probably not fraudulent businesses (even in crypto—Binance, Coinbase, etc). As far as I can tell, the fraud came from covering specific failures at Alameda caused by bad decisions, and wasn’t inherent to FTX making any money at all?
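For what it’s worth, the kind of back-of-the-envelope revenue check people have in mind here is cheap to run. A sketch with purely illustrative volume and fee assumptions (not FTX’s actual figures):

```python
# Back-of-the-envelope: does "exchange earning trading fees" plausibly
# generate revenue of a reported order of magnitude? All numbers assumed.

daily_volume_usd = 10e9    # assumed average daily trading volume
avg_fee_rate = 0.0004      # assumed blended maker/taker fee (4 basis points)

annual_fee_revenue = daily_volume_usd * avg_fee_rate * 365
print(f"Implied annual fee revenue: ${annual_fee_revenue / 1e9:.1f}B")  # ~$1.5B
```

If the claimed mechanism can generate roughly the reported revenue under plausible assumptions, the business model itself passes the sanity check, and the fraud has to be located elsewhere.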
I’m gonna wait it out on this one.
I’d currently wildly guess that Coinbase is not a fraud.
I agree that it is less likely to be a fraud than Binance, given that publicly listed companies are required to be more transparent[1], but I do not know much about these particular companies.
of course Enron, Wirecard and others show that being listed on the stock market is no guarantee
Yeah, fair. I have no real knowledge of crypto, so am not particularly endorsing Binance
This seems to be “not even wrong”—FTX’s business model isn’t and never was in question. The issue is Sam committing fraud and misappropriating customer funds, and there being a total lack of internal controls at FTX that made this possible.
If you say that your business model is to hold depositor funds 1:1 and earn money from fees, but in fact you sometimes earn money via making trades with depositor funds, then you would be misrepresenting your business model.
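To make that concrete, a minimal sketch (in Python, with made-up balances) of the 1:1 test that a “we never touch depositor funds” exchange should pass at all times:

```python
# Minimal 1:1 reserve check. Balances are invented for illustration; a
# real audit would verify custody on-chain and in bank accounts per asset.

customer_liabilities = {"BTC": 105_000.0, "ETH": 1_200_000.0, "USD": 4.2e9}
exchange_holdings = {"BTC": 104_000.0, "ETH": 1_250_000.0, "USD": 1.1e9}

def fully_reserved(liabilities, holdings):
    """True only if every customer balance is backed 1:1 by custodied assets."""
    return all(holdings.get(asset, 0.0) >= owed for asset, owed in liabilities.items())

print(fully_reserved(customer_liabilities, exchange_holdings))
# False here: the USD and BTC balances exceed holdings, i.e. depositor
# funds are being used for something other than sitting in custody.
```

Failing this check at any moment is exactly the misrepresentation described above, whatever the stated business model.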
Sure, and what is your point?
My current best guess is that WM quite reasonably understood FTX to be a crypto exchange with a legitimate business model earning money from fees—just like the rest of the world also thought. The fact that FTX was making trades with depositor funds was very likely a closely kept secret that no one at FTX would disclose to an outsider. Why the hell would they—it’s pretty shady business!
Are you saying WM should have demanded to see proof that FTX’s money was being earned legitimately, even if he didn’t have any reason to believe it might not be? This seems to me like hindsight bias. To give an analogy—have you ever asked an employer of yours for proof that their activities aren’t fraudulent?
Not disagreeing with your overall point, but if my non-EA aligned, low-level crypto trader friend is any indication, then there certainly was reason to believe that SBF was at the very least doing some shady things. In August, I asked this friend for his thoughts on SBF, and this is what he replied:
“He’s obviously super smart but comes across a bit evil while trying to portray the good guy front. His exchange is notorious for liquidating user positions, listing shit coins thats prices trend to zero. He also founded Alameda research (trading / market maker firm) alongside FTX (the exchange). Alameda are one of the biggest crypto trading firms with predatory reputation. There’s also the issue of barely any divide between the exchange and the trading firm so alameda likely sees a lot of exchange data that gives them an edge trading on FTX vs other users.”
The irony is that this friend lost most of his savings because he was an FTX user.
Also from the Sequoia profile: “After SBF quit Jane Street, he moved back home to the Bay Area, where Will MacAskill had offered him a job as director of business development at the Centre for Effective Altruism.” It was precisely at this time that SBF launched Alameda Research, with Tara Mac Aulay (then the president of CEA) as a co-founder ( https://www.bloomberg.com/news/articles/2022-07-14/celsius-bankruptcy-filing-shows-long-reach-of-sam-bankman-fried).
To what extent was Will or any other CEA figure involved with launching Alameda and/or advising it?
Indeed, even by 2019, anyone who took a cursory look at Alameda’s materials would have known that they were engaged in Ponzi schemes or something equally fraudulent.
See https://twitter.com/DylanLeClair_/status/1591521505464455168
As you can see in that Twitter thread, Alameda was promising a guaranteed 15% rate of return on investments, with “no downside.” This is impossible. Only the likes of Bernie Madoff would pretend to have a risk-free return at that level. And the materials look so amateurish (with a number of grammatical errors) that the person posting on Twitter originally thought it was “so egregious my first thought was that it was fake.”
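For a rough sense of why “guaranteed 15%, no downside” fails a basic sanity check, here is a back-of-the-envelope sketch (rates are illustrative, not taken from the deck):

```python
# If a truly riskless 15% existed, anyone could borrow near the actual
# riskless rate and capture the spread at unlimited size. Illustrative rates.

riskfree = 0.02  # roughly where short-term Treasury yields sat in 2018-19
claimed = 0.15   # Alameda's advertised "no downside" return

principal = 100_000_000  # borrow $100M at the riskless rate
profit = principal * (claimed - riskfree)
print(f"Riskless arbitrage profit: ${profit:,.0f} per year")  # $13,000,000

# Compounding makes the claim even starker:
print(f"$1 at a guaranteed 15% for 30 years: ${1.15 ** 30:,.2f}")  # ~$66.21
```

A genuine riskless spread that wide would be arbitraged away almost immediately, which is why a guarantee at that level is a classic Ponzi tell.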
This was fairly close to the time of Alameda’s founding. As of 2019, how much were CEA folks (including Will and Tara) involved with Alameda’s obvious fraud?
Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others split ways with Sam in early 2018. I’ll leave it to them to share more if/when they want to, but I think it’s fair to say they left at least in part due to concerns about Sam’s business ethics. She’s had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam’s actions are used to tarnish Tara.
[Disclosure: Tara is my wife]
Strong agree, but in that case, it seems very unlikely that Will was unaware of these serious “concerns about Sam’s business ethics” back in 2018, and it seems all the more incumbent on him to offer an explanation as to why he kept such a close affiliation with SBF thereafter.
The returns shown in the document are not indicative of fraud—those sorts of returns are very possible when skilled traders deploy short-term trading strategies in inefficient markets, which crypto markets surely were at the time. The default risk when borrowing at 15% might have been very low, but not zero as they suggested. The “no downside” characterization should have been caught by a lawyer, and was misleading.
I would not have concluded they were engaged in Ponzi schemes or were misrepresenting their returns based on the document. [Edit: this originally began “Nobody with an understanding of trading would have…”] There are plenty of sloppy, overoptimistic startup pitch decks out there, but most of the authors of those decks are not future Theranoses.
Good points, Brian . . . I’m sure there are lots of overoptimistic pitch decks, and that a 15% return might be feasible, and maybe I’m just looking at this with the benefit of hindsight.
Even so, an investment firm normally doesn’t do anything like this, right? I mean, I assume that even Renaissance Technologies wouldn’t want to offer one single investment opportunity packaged as a loan with a legally guaranteed 15% rate of return with “no downside.” https://www.bloomberg.com/news/articles/2021-02-10/simons-makes-billions-while-renaissance-investors-fume-at-losses#xj4y7vzkg They might brag about their past returns, but would include lots of verbiage about the risks, and about how past performance is no guarantee of future returns, etc.
Thanks for your reply. I do think it would be unusual to see such promises, particularly from a firm looking for large investments. And I would expect to see a bunch of disclaimers, as you suggest. There might have been such language in the actual investment documents, but still. The excerpt shared on Twitter would have set off red flags for me because it seems sloppy and unprofessional, and it would have made me particularly concerned about their risk management, but I wouldn’t have concluded it was a Ponzi scheme or that there was something fraudulent going on with the reported returns.
It will be interesting to see if all of the FTX/Alameda fraud (if there was fraud, which seems very likely) took place after the most recent investment round. Investors may have failed not in financial diligence but in ensuring appropriate governance and controls (and, apparently, in assessing the character of FTX’s leadership).
Archived version (that gets around the paywall)
One specific question I would want to raise is whether EA leaders involved with FTX were aware of or raised concerns about non-disclosed conflicts of interest between Alameda Research and FTX.
For example, I strongly suspect that EAs tied to FTX knew that SBF and Caroline (CEO of Alameda Research) were romantically involved (I strongly suspect this because I have personally heard Caroline talk about her romantic involvement with SBF in private conversations with several FTX fellows). Given the pre-existing concerns about the conflicts of interest between Alameda Research and FTX (see examples such as these), if this relationship were known to be hidden from investors and other stakeholders, should this not have raised red flags?
I believe that, even in the face of this particular disaster, who EAs are fucking is none of EA’s business. There are very limited exceptions to this rule like “maybe don’t fuck your direct report” or “if you’re recommending somebody for a grant, whom you have fucked, you ought to disclose this fact to the grantor” or “Notice when somebody in a position of power seems to be leaving behind a long trail of unhappy people they’ve fucked”, plus of course everything that shades over into harrassment, assault, and exploitation—none of which are being suggested here.
Outside of that, there’s a heck of a lot of people in this world fucking a heck of a lot of other people; most people who are fucking don’t blow up depository institutions; and controls and diligence on depository institutions should achieve reliability by some avenue other than checking which people are fucking. And I consider it generally harmful for a community to think that it has a right to pass judgment on fucking that is not like really clearly violating deontology. That’s not something that community members owe to a community.
As a Bloomberg article put it in September: https://www.pymnts.com/cryptocurrency/2022/bankman-frieds-stake-in-quant-trading-firm-raises-conflict-questions/
Are you trying to suggest that when two firms need to be at arms-length because of the potential for an enormous conflict of interest, it wouldn’t matter if the two firms’ chief executives were dating each other?
I’m saying that if your clearance process is unable to tell whether or not two firms are arms-length, when they have a great deal to potentially gain from illegally interoperating, without the further piece of info about whether the CEOs are dating, you’re screwed. This is like trying to fix the liar loan problem during the mortgage meltdown by asking whether the loan issuer is dating the loan recipient. The problem is not that, besides the profit motive, two people might also be fond of each other and that’s terrible; the problem is if your screening process isn’t enough to counterbalance the profit motive. A screening process that can make sure two firms aren’t colluding to illegally profit should not then break down if the CEOs go on a date.
Or to put it more compactly and specifically: Given the potential energy between Alameda and FTX as firms, not to mention their other visible degrees of prior entanglement, you’d have to be nuts to rely on an assurance process that made a big deal about whether or not the CEOs were dating.
Maybe even more compactly: Any time two firms could gain a lot of financial free energy by colluding, just pretend you’ve been told their CEOs are dating, okay, and now ask what assurances or tests you want to run past that point.
...I think there must be some basic element of my security mindset that isn’t being shared with voters here (if they’re not just a voting ring, a possibility that somebody else raised in comments), and I’m at somewhat of a loss for what it could be exactly. We’re definitely not operating in the same frame here; the things I’m saying here sure feel like obvious good practices from inside my frame.
Taking prurient interest in other people’s sex lives, trying to regulate them as you deem moral, is a classic easy-mode-to-fall-into of pontificating within your tribe, but it seems like an absurd pillar on which to rest the verification that two finance companies are not intermingling their interests. Being like “Oh gosh SBF and Caroline were dating, how improper” seems like finding this one distracting thing to jump on… which would super not be a key element of any correctly designed corporate assurance process about anything? You’d go audit their books and ask for proofs about crypto cold storage, not demand that somebody’s romance be a dark secret that nobody got to hear about?
We sure are working in different frames here, and I don’t understand the voters’ (if they’re not just a voting ring).
I work (indirectly) in financial risk management. Paying special attention to special categories of risk—like romantic relationships—is very fundamental to risk management. It is not that institutions are faced with a binary choice of ‘manage risk’ or ‘don’t manage risk’, where people in romantic relationships are ‘managed’ and everyone else is ‘not’. Risk management is a spectrum, and there are good reasons to think that people with both romantic and financial entanglements are higher risk than those with financial entanglements only. For example (a toy sketch of this follows the list below):
Romantic relationships inspire particularly strong feelings, of a kind not usually characteristic of financial relationships. People in romantic relationships will take risks on each other’s behalf that people in financial relationships will not. We should be equally worried about familial relationships, which also inspire very strong feelings.
Romantic relationships inspire different feelings from financial relationships. Whereas with a business partner you might be tempted to act badly to make money, with a romantic partner you might be tempted to act badly for many other reasons: to make your partner feel good, for example, or to spare them embarrassment.
Romantic relationships imply a different level of access than financial relationships. People in romantic relationships have levers to make their partner do things they might not otherwise do—think of abusive relationships, threats to end the relationship unless X is done, or watching a partner enter their password to gain access to systems.
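As flagged above, a toy sketch of the spectrum point (all weights, category names, and the scoring function are invented for illustration, not taken from any real risk framework):

```python
# Toy illustration: relationship type acts as a risk multiplier in a
# conflict-of-interest register, not a binary manage/don't-manage flag.
# All weights and categories below are invented for this sketch.

RELATIONSHIP_WEIGHT = {
    "financial_only": 1.0,
    "close_friendship": 1.5,
    "familial": 2.5,   # strong feelings, per the list above
    "romantic": 2.5,   # strong feelings plus unusual access
}

def coi_risk_score(exposure_usd: float, relationship: str, disclosed: bool) -> float:
    """Crude conflict-of-interest score: exposure scaled by relationship
    weight, doubled if undisclosed (no mitigating controls can be applied)."""
    score = exposure_usd * RELATIONSHIP_WEIGHT[relationship]
    return score if disclosed else score * 2.0

# An undisclosed romance between executives of entangled firms ranks far
# above a disclosed, purely financial relationship of the same size:
print(coi_risk_score(1e9, "romantic", disclosed=False))       # 5000000000.0
print(coi_risk_score(1e9, "financial_only", disclosed=True))  # 1000000000.0
```

The particular numbers don’t matter; the point is that relationship type moves a counterparty along the risk spectrum, and that nondisclosure is itself a risk factor.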
So if I were writing these rules, I might very well rephrase it as “do you have a very strong friendship with this other person” and “do you occasionally spend time at each other’s houses” to avoid both allonormativity and the temptation to prurient sniffing; and I’d work hard to keep any disclosed information of that form private, like “don’t store in Internet-connected devices or preferably on computers at all” private, to minimize incentives against honest disclosure. And even then, I might expect that among the consequences of the regulation, would be that CEOs in relationships would occasionally just lie to me about it, now that such incentives had been established against disclosure.
When you optimize against visible correlates of possible malfeasance, you optimize first and above all against visibility; and maybe secondarily against possible malfeasance if the visibility is very reliable and the correlations are strong enough to take causal leaning on them.
But, sure, if you know all that and you understand the consequences, then Sequoia could’ve asked if SBF and Caroline were in a relationship, understanding that a No answer might be a lie given the incentives they’d established, and that a Yes answer indicated unusual honesty.
I don’t really understand why you are describing this as a hypothetical (“If I were writing these rules...”). You are the founder and head of a highly visible EA organisation receiving charitable money from donors, and presumably have some set of policies in place to prevent staff at that organisation from systematically defrauding those donors behind your back. You have written those policies (or delegated someone else to write them for you). You are sufficiently visible in the EA space that your views on financial probity materially affect the state of EA discourse. What you are telling me is that the policies which you wrote don’t include a ‘no undeclared sexual relationships with people who are supposed to act as a check on you defrauding MIRI’ rule, based on your view that it is excessively paternalistic to inquire about people’s sex lives when assessing risk, and that your view is that this is the position that should be adopted in EA spaces generally.
This is—to put it mildly—not the view of the vast majority of organisations which handle money at any significant scale. No sane risk management approach would equate a romantic relationship with ‘a very strong friendship’. Romantic love is qualitatively different to fraternal love. No sane risk management approach would equate “occasionally spend[ing] time at each other’s house” to living together. My wife is often alone in the house for extended periods of time, but I usually hang out with friends when they come over (to give just one difference from an enormous list of possibilities).
EA leadership—which includes you—has clearly made a catastrophic error of financial risk management with this situation. The extent to which they are ‘responsible’ is a fair debate, but it is unquestionable they failed to protect people who trusted them to steer EA into the future—hundreds of people have been made unemployed overnight and EA is potentially facing its first existential risk as a result. I am genuinely baffled how you can look at this situation and conclude that the approach you are describing—a very intelligent non-expert such as yourself creates their own standards of financial conduct at significant odds with the mainstream accepted approach—could still possibly be appropriate in the face of the magnitude of the error this thinking has led to.
I also think it is extremely unedifying that you make the case elsewhere that the disagreement votes you are receiving for your position are from vote manipulation. A more plausible explanation is that people have independently reached the conclusion you are wrong that romantic love presents no special financial risks.
Somebody else in that thread was preemptively yelling “vote manipulation!” and “voting ring!”, and as much as it sounds recursively strange, this plus some voting patterns (early upvotes, then suddenly huge amounts of sudden downvoting) did lead me to suspect that the poster in question was running a bunch of fake accounts and voting with them.
We would in fact be concerned if it turned out that two people who were supposed to have independent eyes on the books were in a relationship and didn’t tell us! And we’d try to predictably conduct ourselves in such a mature, adult, understanding, and non-pearl-clutching fashion that it would be completely safe for those two people to tell the MIRI Board, “Hey, we’ve fallen in love, you need to take auditing responsibility off one of us and move it to somebody else” and have us respond to that in a completely routine, nonthreatening, and unexcited way that created no financial or reputational penalties for us being told about it.
That’s what I think is the healthy, beneficial, and actually useful for minimizing actual fraud in real life culture, of which I do think present EA has some, and which I think is being threatened by performative indignation.
I’m struggling to follow your argument here. What you describe as the situation at MIRI is basically standard risk management approach—if two people create a risk to MIRI’s financial security processes by falling in love, you make sure that neither signs off on risk taken by the other.
But in this thread you are responding with strong disagreement to a comment which says “if this relationship [between SBF and Caroline] were known to be hidden from investors and other stakeholders, should this not have raised red flags?”. You said “who EAs are fucking is none of EA’s business”, amongst other comments of a similar tone.
I don’t understand what exactly you disagree with if you agree SBF and Caroline should have disclosed their relationship so that proper steps could be taken to de-risk their interactions (as would happen at MIRI). It seems that you do agree it matters who EAs are fucking in contexts like this? And therefore that it is relevant to know whether Will MacAskill knew about the undisclosed relationship?
You could plausibly claim it gets disclosed to Sequoia Capital, if SC has shown themselves worthy of being trusted with information like that and responding to it in a sensible fashion eg with more thorough audits. Disclosing to FTX Future Fund seems like a much weirder case, unless FTX Future Fund is auditing FTX’s books well enough that they’d have any hope of detecting fraud—otherwise, what is FTXFF supposed to do with that information?
EA generally thinking that it has a right to know who its celebrity donors are fucking strikes me as incredibly unhealthy.
I think we might be straying from the main point a bit; nobody is proposing a general right to peer into EA sex lives, and I agree that would be unhealthy.
There are some relatively straightforward financial risk management principles which mainstream orgs have been successfully using for decades. You seem to believe one of the pillars of these principles—surfacing risk due to romantic entanglements between parties—shouldn’t apply to EA, and instead some sort of ‘commonsense’ approach should prevail instead (inverted commas because I think the standard way is basically common sense too).
But I don’t understand where your confidence that you’re right here is coming from—EA leadership has just materially failed to protect EA membership from bad actor risk stemming at least in part from a hidden conflict of interest due to a romantic entanglement. EA leadership has been given an opportunity to run risk management their way, and the result is that EA is now associated with the biggest crypto fraud in history. Surely the Bayesian update here is that there are strong reasons to believe mainstream finance had it approximately right?
Rereading the above, I think I might just be unproductively repeating myself at this point, so I’ll duck out of the discussion. I appreciated the respectful back-and-forth, especially considering parts of what I was saying were (unavoidably) pretty close to personal attacks on you and the EA leadership more broadly. Hope you had a pleasant evening too!
My (possibly wrong) understanding of what Eliezer is saying:
FTX ought to have responded internally to the conflict of interest, but they had no obligation to disclose it externally (to Future Fund staff or wider EA community).
The failure in FTX was that they did not implement the right internal controls—not that the relationship was “hidden from investors and other stakeholders.”
If EA leadership and FTX investors made a mistake, it was failing to ensure that FTX had implemented the right internal controls—not failing to know about the relationship.
I couldn’t quite bottom out exactly what EY was saying, but I’m pretty sure it wasn’t that. On your interpretation, EY said, “who EAs are fucking is none of [wider] EA’s business [except people who are directly affected by the COI]”. But he goes on to clarify “There are very limited exceptions to this rule like ‘maybe don’t fuck your direct report’ ”. If that’s an exception to the rule of EAs fucking being only of interest to directly affected parties, then it means EY thinks an EA having sex with a subordinate should be broadcast to the entire community. That’s a very strict standard (although I guess not crazy—just odd that EY was presenting it as a more relaxed / less prurient standard than conventional financial risk management).
It also doesn’t address my core objection, which is that EA leadership failed very badly to implement proper financial risk management processes. Generally my point was that EA leadership should be epistemically humble now and just implement the risk management processes that work for banks, rather than tinkering around and introducing their own version of these systems. Regardless of what EY meant, unless he meant ‘We should hire in PWC to implement the same financial controls as every Fortune company’, he is making exactly the same mistake EA leadership made with FTX—assuming that they could create better risk management from first principles than the mainstream system could from actual experience.
By the way, I disagree with the objective position here too. Every FTX investor needed to know about the COI and the management strategy FTX adopted in order to assess their risk exposure. This would be the standard at a conventional company (if a company knew about such a blatant COI involving their CEO and didn’t tell investors, then their risk officers would potentially be liable for the fraud too, iirc).
Voting ring? That sounds preposterous to me
This comment is, at time of writing, sitting at −7 karma from 5 votes. Can someone who downvoted or strong downvoted this clarify why they did so?
What’s disappointing is not just that Eliezer can’t make even a minor acknowledgement of the relevance of the models or experiences of others, or that he is probably outright wrong on the substantive issues, but that he struggles to communicate and hold a thread in this conversation.
His counterpart is a literal domain expert and perhaps a very valuable talent for EA. Speaking to the totality of the votes and writing: this person is being badgered under what, to any outsider, must seem like scary or unclear norms and power structures of the EA community, on its own forum, while Eliezer’s de facto community keeps him afloat.
Eliezer’s behavior would be unacceptable from a funded, junior community builder, much less a senior leader.
Imagine a newcomer witnessing this, much less experiencing this.
I agree that it does not seem likely that there was manipulation here with the votes (I cast strong disagreement-votes on multiple comments by Eliezer on this page). But concerns about potential voting manipulation on this forum are reasonable by default, considering that it’s an open platform on which it’s technically possible for someone to vote from multiple anonymous accounts.
Fair enough, as long as that’s the standard that is applied to all commenters and not just EA leadership. I appreciate that EY agreed there was likely no manipulation after I pointed this out.
It is extremely inaccurate to characterise the relationship between Bankman-Fried and Ellison as merely “dating”, and that people are merely saying that this was “improper”.
Bankman-Fried and Ellison were living together in a shared apartment. One article contains an unverified suggestion that Ellison may have been in some form of relationship with other residents of the apartment—residents who included the CTO and Director of Engineering of FTX.
If true, this is a genuinely alarming piece of information that would very obviously have caused anybody to question whether they should have placed their funds in the care of this specific group of people. However this information was not made public, and that lack of transparency is where the problem lies.
People who read this far seem to have upvoted
This statement is incredibly out of touch, Eliezer. If CEO #1 and CEO #2 are in a romantic relationship, there is a clear conflict of interest here, especially when not disclosed to the public. In agreement with Anonymous, I also strongly oppose the language you’re using. I also agree with their comments regarding romantic relationships in the workplace. My general stance is zero tolerance for workplace romance because it’s messy and there are far too many power dynamics at play.
Conflict of interest is the issue, my friend. Unbiased decisions cannot happen when one has an other-than-work relationship with the persons they are dealing with.
...You think it’s important to disclose this conflict of interest when you recommend a grant to someone, but not important when you as a CEO decide on a multi-billion dollar loan to the company where the other person is the CEO?
Disclose to who? The loan was blatantly bad; nobody in a position to take that disclosure should’ve given two flying floops whether the CEOs were dating or not.
Is the word “maybe” here just a style of writing? Should the EA community tolerate some cases in which person A is having sex with a person B who is a direct report of A (in an EA org)?
I realize that inserting hedge words can allow one to publish things using much less time and energy (which can be a very good reason to insert hedge words).
Because as somebody who could potentially be mistaken for a Leader I want to be pretty derned careful about laying down laws to regulate other people’s sexuality; and while something like that would definitely be a red flag at, like, idk, CEA or MIRI, maybe it’s different if we’re talking about a 3-person startup. Maybe you’ll say it’s still ill-advised, but I don’t know their circumstances and there’s also a difference between ill-advised and Forbidden. I feel a lot more comfortable leaving out the ‘maybe’ when I pontificate my legislation about informing a donor that your recommended grantee is one with whom you’ve had a relationship—though even there, come to think, I’m relying on all the donors I ever talk to being sensible people who aren’t going to go “OH NO, PREMARITAL SEX” about it.
...I am confused and somewhat worried by the degree to which voters on this post seem to feel that it’s not an important heuristic to try to construct your community regulatory process in a way that doesn’t revolve around people’s sex lives, except in so far as the sex itself is per se a bad thing (eg nonconsensual or under conditions where positive consent could not reasonably be determined).
It’s not about the sex in and of itself, it’s about the conflict of interest and favouritism. Romantic love interest is enough for that too. EA could probably learn a lot from how mainstream orgs deal with this.
Yes—I almost can’t believe I am reading a senior EA figure suggesting that every major financial institution has an unreasonably prurient interest in the sex lives of their risk-holding employees. EA has just taken a bath because it was worse at financial risk assessment than it thought it was. The response here seems to be to double-down on the view that a sufficiently intelligent rationalist can derive—from first principles—better risk management than the lessons embedded in professional organisations. We have ample evidence that this approach did not work in the case of FTX funding, and that real people are really suffering because EA leaders made the wrong call here.
Now is the time to eat a big plate of epistemically humble crow, and accept that this approach failed horribly. Conspiracy theorising about ‘voting rings’ is a pretty terrible look.
I feel like people are mischaracterizing what Eliezer is saying. It sounds to me like he’s saying the following.
“Sure, the fact that the two were dating or having sex makes it even more obvious that something was amiss, but the real problem was obviously that Alameda and FTX were entangled from the very start with Sam having had total control of Alameda before he started FTX, and there were no checks and balances and so on, so why are you weirdos focusing on the sex part so much and ignore all the other blatant issues?!”
That seems like a very charitable reading of the comment
“who EAs are fucking is none of EA’s business. There are very limited exceptions to this rule like … none of which are being suggested here.”
I’d suggest that given the high stakes of the situation at the moment it is especially important not to inadvertently give the impression that EA leadership think they have privileged insight into financial risk management that they actually don’t. If EY has merely mangled his argument (as you suggest) it would be very sensible for him to edit his comment to reflect that, and apologise for implying that vote rigging was the only reason he could have been down voted.
I was commenting on his overall stance from his comments throughout the threads here, not only on that particular first comment. I agree that the part you cite doesn’t sound defensible. I considered his further comments to be admissions of “Okay, you all have a point, but …” (If I’m right in my interpretation, he could’ve been clearer about the “sorry, you all have a point and the initial comment was too crude” part.)
Edit: FWIW, I thought the info/arguments you gave about why it’s common practice in finance to carefully monitor romantic or sexual conflicts of interests were compelling, and your point about how EAs maybe shouldn’t think they can do better based on first-principles reasoning also seems wise.
I’m under the impression that mainstream orgs deal with this rather poorly, by having the relationships still happen, but be Big Dark Forbidden Secrets instead of things that people are allowed to actually know about and take into account. But they Pretend to be acting with Great Propriety which is all that matters for the great kayfabe performance in front of those who’d perform pearl-clutching otherwise. People falsifying their romantic relationships to conform to ideas about required public image is part of our present culture of everything being fake; so what loves you forbid from being known and spoken of, by way of trying to forbid the loves themselves, you should forbid very hesitantly.
I think our current culture is better, even in light of current events, because I don’t think the standard culture would have actually prevented this bad outcome (unless almost any minor causal perturbance would’ve prevented it). It would mean that SBF/C’s relationship was just coming out now even though they’d previously supposedly properly broken up before setting up the two companies, or something—if we learned even that much!
One thing that people in mainstream orgs do, if they want to act with integrity, is resign from roles/go work somewhere else when they want to start a relationship that would create a conflict of interest whilst both are in their current positions (or if they value their job(s) more, give up on the idea of the relationship).
The first half of this comment is tangential at best. It’s also a bit odd to me how much you are defending “our current culture” when someone posted on the forum just yesterday expressing concerns about said culture.
...are you suggesting that nobody ought to dare to defend aspects of our current culture once somebody has expressed concerns about them?
I’m suggesting that you show some humility.
...by not saying anything in favor of protecting some aspect of our current culture, when somebody else has just recently expressed concerns about it? That’s a rule?
I would have expected the opposite corner of the two-axis voting (because I think people don’t like the language).
First of all, this language is wildly unhelpful, even outside of the current situation.
Secondly, this isn’t close to true, and shows ignorance and blasé disregard for a wide range of social and power structures in the real world, such as corporate environments. This is not some leftist social justice statement. Even before the #metoo era, there’s a vast range of sexual conduct that wouldn’t be acceptable for a middle/senior leader.
Finally, you, Eliezer Yudkowsky, are a major part of the problem, with EA, as you put it. Your intellectual contributions are poorly regarded in the real world and across EA. This is not a “sneerclub” view, but by open minded outsiders, and senior EAs across all cause areas. Even in a full AI safety worldview, your recent views/contributions on AI safety have not been positive, and would have been an issue to work around, even without the changes in longtermist funding.
As a heterosexual male who has interacted with many female EAs, I almost never consider any sort of romantic or sexual relationship with any female EA, and absolutely not a junior one. This is partially a consequence of past events due to the misconduct of others, of which I am entirely innocent. Many other male EAs have similar personal policies, for the same reasons.
As the public content above shows, it is not salacious to say that even this basic idea is ignored by others. I have plausible reasons to believe this is costly, and I resent the cost the movement bears, carrying on like this, due to this kind of behavior.
Note that this account is anonymous, but not because of a desire to communicate or criticize without retaliation, and my identity will be clear.
This statement has been downvoted and removed from view. I’m pretty skeptical that is helpful.
Perhaps ditch the “Your intellectual contributions are poorly regarded” thread; at best, it is unsupported & off-topic.
Morale is low right now and senior EA figures are occupied and some have come under direct criticism, whether justified or not. In this environment, it’s difficult to communicate or express leadership. Only the CEA community health team seems to be taking the initiative, which must be very difficult and this is heroic.
In this situation there is often gardening of the online space that tends to be performed by marginal actors. LW and MIRI have been left mostly unscathed by the FTX disaster, and now Eliezer and Rob B (a professional communicator employed by MIRI) are highly active. In the sense of advancing their cause, that’s OK and natural. They are also helpful in tempering gardening attempts by other actors.
Note that I don’t think Eliezer’s representations, such as downplaying his interactions with FTX (he might have been a regrantor and was probably more actively jockeying/seeking money than it seems, which is understandable) and other statements, are entirely truthful or disinterested.
More importantly, the low opinion of Eliezer’s contributions is well known, relevant, and should be communicated. (The quality of his output and of MIRI’s was considered low, which is why they received relatively little funding and were unscathed[1].)
The fact that they were not funded by a bad person is not a sign of virtue or quality; to say this in “LW speak”, see Reversed Stupidity Is Not Intelligence.
I think this was downvoted in the first 10 seconds; I am bumping this so it gets read.
As of writing, I want to point out that Eliezer’s comments, which are a probably strained digression to explain his original comment (and include speculation of a voting ring against him ???), have the following vote score.
My comments below this have:
This is not only not justifiable by content, this is literally suppressing criticism of a promoted EA figure on the EA forum.
Note that my comment is not policing or calling out actual private interpersonal conduct, and it seems well justified given the parent comment, as well as the wide range of topics discussed. A week ago, I think we all know, I would have been voted down for factual but off-color statements about SBF’s business practices.
Now, here, I make the additional, specific accusation that the existence of Eliezer Yudkowsky as a major public figure in EA is out of proportion to his contributions or his popularity in EA, and is partially supported by de facto organized coordination by a group of people on the LessWrong and Effective Altruism forums.
That is, my comment has been shared on, say, private FB groups, Slack, or Discord servers, with the expected aim of managing this content.
I encourage examination of the view/vote graph and the origin of view/votes for my comment and possibly other content.
I am not attacking the cultures, views to help the world or ways of interpersonal relationships of people close to Eliezer. I do not want Eliezer to be harmed, or reduce his agency to contribute in the ways he wants to.
Eliezer is popular here. He founded LessWrong, MIRI and the AGI x-risk community. It’s not surprising you are getting downvoted for criticising his work (note I have not downvoted you, just explaining here).
Not just for criticism of his work but also for bringing this up in a totally unrelated context. If you’re (I mean the anonymous commenter) bothered by the way Eliezer dismisses concerns around “sex within orgs or close networks makes things messy and often ends badly,” I think that’s fair enough and I wouldn’t have downvoted your comment for it. But then adding that you think his intellectual contributions are also shit (or at least are seen as bad by people outside the movement) – that just seems a bit mean-spirited (besides IMO being wrong).
… I feel sad and uncomfortable about the commenters here criticizing Anonymous for “personally attacking” Eliezer, “bringing this up in a totally unrelated context”, being “mean-spirited”, etc.
It surely matters whether or not the intellectual contributions of someone in Eliezer’s reference class are bad, and in a world where they are bad, I care a lot more about learning that fact than about exactly which thread or subthread the discussion occurs on.
I’m glad you mention “besides IMO being wrong” at all. But where’s the objection that no supporting argument has been given? Where are the requests for specifics, so that it’s even possible to evaluate Anon’s claim by comparing notes about whether a given idea is a good intellectual contribution?
The problem with “More importantly, the low opinion of Eliezer’s contributions is well known” isn’t that it’s rude or off-topic; it’s that it’s maximally vague, more like a schoolyard taunt (“Oh, everyone knows X is lame, it’s so obvious I don’t even need to say why!”) than like a normal critique of someone’s intellectual output. If you think Eliezer’s wrong about tons of stuff, give some examples so those can be talked about, for goodness’ sake.
I agree that maximal vagueness is the much bigger issue with the intellectual criticism part of the comment than its unrelatedness and should also have said so. (And also via that vagueness implying that there’s a consensus where there IMO isn’t.)
I have, as it happens, a low opinion of Eliezer’s influence on EA (though I admit I’ve hardly read his stuff), but I still downvoted a generalized off-topic nasty personal attack.
Is the romantic relationship that big a deal? They were known to be friends and colleagues + both were known to be involved with EA and FTX future fund, and I thought it was basically common knowledge that Alameda was deeply connected with FTX as you show with those links—it just seems kind of obvious with FTX being composed of former Alameda employees and them sharing an office space or something like that.
Romantic love is a lot more intense than mere friendship! Makes conflicts of interest way more likely.
My $0.02 - (almost) the entire financial and crypto world, including many prominent VC funds that invested in FTX directly, seems to have been blindsided by the FTX blowup. So I’m less concerned about the ability to foresee that. However, the 2018 dispute at Alameda seems like a good reason to be skeptical, and I’m very curious what was known by prominent EA figures, what steps they took to make it right, and whether SBF was confronted about it before prominent people joined the Future Fund, etc.
+1, I think people are applying too much hindsight here.* The main counter consideration: To the degree that EAs had info that VCs didn’t have, it should’ve made us do better.
*It’s still important to analyze what went wrong and learn from it.
Hi Milan,
I come from the traditional accounting/internal audit world, where governance teams and internal controls are installed at the very least, and due diligence is a best practice, especially when large sums of money are being distributed. I am new here to the EA community and had expected similar protocols to be in place, as large-scale fraud is not some new thing—it brought down the accounting profession in 2001 (Enron) and fed the mortgage crisis in 2008 (Lehman).
I guess what is clear to me is that EA lacks expertise in fraud/error detection, and moreover has to make some improvements in the near future.
All the best,
Miguel
+1 on this. It is painfully clear that we need to radically improve our practices relating to due diligence moving forward.
Sorry to be jumping in having never posted here, but I’ve been following along for a while and I’m fascinated.
Everything mentioned in the Sequoia piece about MacAskill’s involvement is strange. I’d be interested in hearing more about what his “pitch” was like back then:
What was with this recruitment process? The notion of “signing on” jumps out at me. Also MacAskill is the one who told SBF to go to Jane Street?
That’s so weird. I get the idea of trying to spread the gospel, but has MacAskill ever spoken about his motives for...going around meeting college kids to...I don’t even know what the correct description would be. Gain acolytes?
I think his official public motive is obvious: he’s always tried to get people to do things he thinks have positive altruistic impact (for example, by writing books advocating they do stuff), so he was doing the same with potentially influential people at a more 1-1 level. I don’t think this is something that’s ever been hidden! I can see why you might reasonably think this sort of influence-seeking feels a bit off, since on some level it is an attempt to exercise power in a way that bypasses democracy. I’m sure someone has criticized it on those grounds. But organizations recruiting talented college students is quite normal in itself, even if they don’t usually have to sign on to a detailed philosophy. And even the latter is hardly unique: think of someone networking informally for people to get involved with their new libertarian think tank, or socialist magazine.
Oh, yeah, I totally agree. I don’t think of it as a way to bypass democracy or exercise undue influence. The main thing for me is that SBF and MacAskill are so interconnected. I thought it was primarily a philosophical connection, but the financial connection seems just as important, especially since MacAskill has been involved in every single part of SBF’s career: the first job at Jane Street, the arbitrage, the founding of Alameda, and now all the FTX crap.
Outside of recent political donations, it seems that SBF was shoveling most of his donation money back into MacAskill’s organizations. (Someone else linked to his old blog, which gives a glimpse of this: http://measuringshadowsblog.blogspot.com/)
Now that SBF’s biggest endeavor has turned out to be a giant scam, it’s important to understand what MacAskill knew about everything and whether any of the same kind of financial misconduct is going on at any of the charitable organizations. I’m sure we’ll know a lot more soon, though.
‘Now that SBF’s biggest endeavor has turned out to be a giant scam’ I’m not sure that is quite right. As far as I can tell, the main definite issue that has been proven is that he stole money when he lost money, which is not quite the same as the endeavor itself being a scam from the beginning. Not that that is any morally better, and it was so deeply, deeply morally wrong that I strongly suspect SBF has done other very bad things, but it’s good to be precise I think. (I agree that Will had a severe conflict of interest, but I do think we have to stress that there’s no strong direct evidence of wrongdoing on his part yet, and that getting rich people to fund you is a standard charity/political party model used by everyone, even if known to be problematic and even if Will and SBF seem to have been unusually connected.)
They’re probably talking about the Giving What We Can pledge here, which MacAskill co-founded. I don’t see why anyone would consider that controversial.