Posting in my personal capacity as a consultant, not on behalf of my employer.
Let me start off by saying that’s an interesting question, and one I can’t give a highly confident answer to because I don’t know that I’ve ever had a conversation with a colleague about truth qua truth.
That said, my short answer would be: I think many of us care about truth, I think our work can be shaped by factors other than truth-seeking, and I think that if the statement of work or client need is explicitly about truth / having the tough conversations, consultants wouldn’t find it especially hard to deliver on that. The only factor particular to consulting that I could see weighing against truth-seeking is the desire to sell future work to the client… but to me that’s resolved by clients making clear that what they value is truth, which would keep incentives well-aligned.
My longer answer...
I think most of my colleagues do care about truth and are willing to take a firm stance on what they believe is right, even if it’s a tough message for the client to hear. [Indeed, I’ve explicitly heard firm leadership share examples of such behavior… which I think is an indicator that a) it does happen, but b) it’s not a given, which ties to...]
...I think there’s a recognition that, at the end of the day, we have formal signed statements of work regarding what our clients expect us to deliver, and our foremost obligation is to deliver according to that contract (and secondarily, to the client’s satisfaction) rather than to “truth.”
If our contracts were structured in a more open-ended manner, or were explicitly framed around us delivering the truth, I see no reason (other than the aforementioned incentive to sell future work) why we would do anything other than provide that honest perspective.
I wonder to what extent employees of EA organizations feel competing forces against truth (e.g., the need to keep one’s job, to not rock the boat, or to avoid saying controversial things that could upset donors). I think you could make a case that consultants are actually better poised to do some of that truth-seeking, e.g., if it’s a true one-off contract.
To your 2nd question about >70%:
I don’t think this framing is really putting your original question another way (to sprinkle in some consulting-ese, I think “the question behind your question” is something else).
That said, my “safe,” not-super-helpful, and please-don’t-selectively-quote-this-out-of-context answer is less than half the time...
...But that’s because most of the work I do (and, I’d venture to say, most of us do) isn’t about truth-seeking, so it’s not the sort of thing about which reasonable people of goodwill will have meaningful disagreement. Rather, the work is about further developing a client’s hypothesis, or helping them understand how best to pursue an objective, or helping them execute a process in which they lack expertise [all generally in the service of increasing client profitability].
Sharing my reflections on the piece here (these don’t directly address this particular post; they’re thoughts I originally shared with a friend).
While I agree with many of the points the author makes and think he raises valuable critiques of EA, I don’t find his arguments related to SBF especially compelling. My run-through of the perceived problems within EA that the author describes, and my reactions:
1. The dominance of philosophy. I personally find parts of longtermism kooky and I’m not strongly compelled by many of its claims, but the Vox author doesn’t explain how this relates to SBF (or his misdeeds)… it feels more like shoehorning a critique of EA into a piece on SBF?
2. Porous boundaries between billionaires and their giving. So, yes, it sounds like SBF was very directly involved in the philanthropy his funds went toward, but I don’t think that caused (much? any?) incremental reputational harm to EA vs. a world where he created the “SBF family foundation” and had other people running the organization.
If I wanted to rescue this argument, maybe I could say SBF’s behavior here is representative of a common trait of his (at FTX and in his charity): SBF doesn’t even have the dignity to surround himself with yes-men; he insists on doing it all himself! And maybe that’s a red flag re: cult of personality/genius and/or fraud that EA should have caught on to.
I will say, though, that the FTX Future Fund had a board/team that was fairly star-studded and ran a big re-granting program (i.e., it let others make grants with FTX’s money). Which is to say, I’m not sure how directly involved SBF actually was in the giving. [As an aside, I think it’s fine for billionaires to direct their own giving, and I’m a lot more suspicious of non-profit bloat and organizational incentives than the Vox author is.]
3. Utilitarianism free of guardrails. I agree a lack of guardrails is a problem, but:
a) On utilitarianism’s own account, it seems to me you should recognize that if you commit massive fraud you’ll probably get caught and it will all be worthless (+ cause serious reputational harm to utilitarianism), so committing the fraud is doing utilitarianism wrong; I sketch the arithmetic below, after this list. [I don’t think I’m no-true-Scotsman-ing here?]
b) More importantly… the author doesn’t explain how unabashed utilitarianism led to SBF’s actions; the piece vaguely hand-waves and tries to make its point by association rather than by actual causal reasoning / proof, in the same vein as the dominance-of-philosophy point above. I guess the steelman is: SBF wanted to do the most good at any cost, and genuinely thought the best way to do so was to commit fraud(?). A bit tough for me to swallow.
4. Utilitarianism full of hubris. A rare reference to evidence (well, an unconfirmed account, but at least it’s something!). Comparing the St. Petersburg paradox to SBF figuring he could double-or-nothing his way out of letting Alameda default is an interesting point to make (a quick sketch of the paradox follows after this list), but SBF’s take on this was so wild as to surprise other EA-ers. So it strikes me as a point in favor of “SBF has absurd viewpoints and his actions reflect that” vs. “EA enabled SBF.” Meanwhile, the author moves directly from this anecdote to “This is not, I should say, the first time a consequentialist *movement* has made this kind of error” (emphasis added). SBF != the movement, and I think the consensus EA view is the opposite of SBF’s, so this feels misleading at best.
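For anyone who hasn’t seen the St. Petersburg paradox spelled out, here’s a minimal sketch (my own summary, not anything from the Vox piece): a fair coin is flipped until it first lands heads, and if the first heads comes on flip $n$, the payout is $2^n$. The expected payout is then

$$\mathbb{E}[\text{payout}] = \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^n \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty,$$

so a naive expected-value maximizer should pay any finite price to play, and by the same logic should keep accepting ever-larger double-or-nothing gambles. That’s the flavor of reasoning the anecdote attributes to SBF.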
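And here’s the rough arithmetic behind my point 3a, with made-up variables and numbers purely for illustration (they’re mine, not the author’s): suppose a massive fraud has probability $p$ of going undetected, produces good $G$ if it works, and produces direct harm $L$ plus reputational damage to the movement $R$ if it unravels. Then

$$\mathbb{E}[\text{fraud}] = p \cdot G - (1 - p)\,(L + R),$$

which for a plausibly high detection rate (say $p = 0.1$) is negative unless $G$ exceeds $9\,(L + R)$. So even a committed expected-value maximizer should decline the fraud on utilitarianism’s own terms.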
One EA critique in the piece did resonate with me, and I’m not sure I’d seen it put so succinctly elsewhere. While not about SBF, it’s a point I don’t see us talking about often enough with regard to EA perceptions / reputation, and I appreciated the author making it.
TL;DR: I thought it was an interesting and thought-provoking piece with some good critiques of EA, but the author (or, perhaps more likely, the editor who wrote the title / sub-headers) bit off more than they could chew in actually connecting EA to SBF’s actions.