The problem with Kat’s text is that it’s a very thinly veiled threat to end someone’s career in an attempt to control Nonlinear’s image. There is no context that justifies such a threat.
[Question] How would a language model become goal-directed?
Isn’t that a bit self-aggrandising? I prefer “aspiring EA-adjacent”.
Thanks for all you do.
I feel that changing the nature of the Maximum Impact Fund in this way should come with a renaming of the fund, since it is no longer going all-out on expected value; whereas before it was “maximizing” expected “impact”, it no longer is. And many donors have come to expect that the MIF is the go-to for high-EV donations, and will not notice this change.
Something like the ‘Top Charities Fund’ or ‘High Impact Fund’ would flag the fundamental change, and would be a bit less misleading.
This statement doesn’t disavow the idea of funding neo-Nazism, and the lacuna is worrying: by convention (pragmatics), omitting to comment on the salient thing is taken as a comment in itself. Have you sought advice from communications specialists? If not, and you want to disavow the main allegation, seeking such advice would be well worth it to avoid unnecessary misinterpretation.
Here are the main bits that stood out to me as suboptimal communication.
-
I would like to understand why you decided to reject the grant proposal after doing due diligence. Was it because of their far-right politics, a conflict of interest, a reputational hazard, or something else?
> The Future of Life Institute makes no apologies for engaging with many people across the immensely diverse political spectrum
I wish you would not imply your critics are politically narrow-minded for being worried that FLI is alleged to have considered supporting a neo-Nazi outlet. I would like to understand whether there are any limits here: are there any political views you are not willing to support?
-
It seems prima facie bad that orgs under the Effective Ventures umbrella ask employees, contractors, and volunteers to sign NDAs without reassurance of whistleblower protections. Even if the content of the NDA is mundane, it will have a chilling effect.
Yeah, I’ll note, before the memory slips away, that my initial reaction to the TIME article paragraph about Owen was:
- horror/disgust
- hope that the person was not as central as implied in the text
- (get distracted by my own work/life and allow the news to slip into the background of my mind, and allow the hope to transform into an implicit feeling that the person was, hopefully, not as central as Owen was)
- have an unjustified implicit belief that the person is not core to EA
- find out that that was wrong <-- I am here, and the only reason I can detect my previous implicit beliefs is from the current feeling of surprisal
Calling this an ‘inquisition’ is hyperbolic. What I see is a small number of people expressing critical views and feelings, and seeking answers from FLI. I would far prefer a world in which people feel entitled to do that over one where it’s discouraged. When I imagine the alternative, I imagine a world in which we automatically assume good intent on the part of any authority figure alleged to have done something bad, or one in which people are too polite or timid to speak out, etc.
It seems as if you find outrage in response to misdeeds more offensive than the misdeeds themselves, because you offer support without conditionalising on how bad the misdeeds are (“however questionable” they are).
In fact, there is a point beyond which the badness of believing one is in the right vastly overshadows anything respectable about sticking to one’s convictions. When someone has done something clearly bad, say corruption, and doesn’t agree they have done anything wrong, the lack of apology, while technically virtuous, deserves far less praise than the disagreement deserves censure. So my position is “depending on how bad the actions were, FLI should apologise or not apologise, and we should criticise or punish them in proportion to how bad the actions were”.
Save the Date: EAGxCambridge (UK)
Pause For Thought: The AI Pause Debate (Astral Codex Ten)
I love this.
I feel like ‘do good better.’ on a t-shirt gives off ‘holier than thou’ vibes. Especially with that period. It’s easy to read it as ‘better than other people’ rather than ‘better than me in the past’.
What I think was shady here:
- Why would Will want SBF to buy Twitter, and think it worth billions? Apart from thinking it was a great business investment, a strong contender for the reason is ‘propaganda for our values’. That’s not very integrity-like. (If anyone can fill in the gaps there, please do.) It’s hard to read the proposal as only being motivated by investing, because Will says in his opening DM: “Sam Bankman-Fried has for a while been potentially interested in purchasing it and then making it better for the world”
- It’s an example of how EA was too trusting of SBF
- Seems like poor judgement given the price tag
- A general sense that I would be ashamed for this to leak if I were Will (I had this sense before recent revelations about SBF).[1]
So I would very much appreciate an explanation by Will of what his motive was here, and who he consulted on this monumental decision. If nothing else, it would model transparency and accountability.
[1] I should have been more public about my feelings at the time, but didn’t, out of (I guess) cowardice and not wanting to tarnish EA’s reputation, which is a dishonourable impulse.
EAGxCambridge 2023 Retrospective
A related problem I’ve experienced is that it’s hard for a 2-person conversation to spontaneously grow, because of the problem of “I want to go up and say hi, but what if I’m interrupting a booked 1:1?”
19 hours later, the posts have dropped off the front page.
On mobile so can’t upload a screenshot, but I have one
For information, CEA’s OP links to an explanation of impartiality:
> Impartial altruism: We believe that all people count equally. Of course it’s reasonable to have special concern for one’s own family, friends and life. But, when trying to do as much good as possible, we aim to give everyone’s interests equal weight, no matter where or when they live. This means focusing on the groups who are most neglected, which usually means focusing on those who don’t have as much power to protect their own interests.
EAGxCambridge 2023
Retaliation is bad. If you think doing X is bad, then you shouldn’t do X, even if you’re ‘only doing it to make the point that doing X is bad’.
AIM simply doesn’t rate AI safety as a priority cause area. It’s not any particular organisation’s job to work on your favourite cause area. They are allowed to have a different prioritisation from you.
I read the author’s intention, when she makes the case for ‘forgiveness as a virtue’, as a bid to (1) seem more virtuous herself, and (2) make others more likely to forgive her (since she was so generous to her accusers, at least in that section, and we want to reciprocate generosity). I think this is an effective persuasive-writing technique, but it is not relevant to the questions at issue (who did what).
Another related ‘persuasive writing’ technique I spotted was that, in general, Kat is keen to phrase the hypothesis that Nonlinear did bad things in an extreme way, effectively challenging skeptics: “so, you’re saying we’re completely evil moustache-twirling vagabonds from out of a children’s fairytale?”. That’s a straw person, because what’s at issue is the overall character of Nonlinear staff, not whether they’re cartoon villains. The word ‘witch’ is used 7 times in this post, and ‘evil’ half a dozen times too. Quote:
> 2 EAs are Secretly Evil Hypothesis: 2 (of 21) Nonlinear employees felt bad because while Kat/Emerson seem like kind, uplifting charity workers publicly, behind closed doors they are ill-intentioned ne’er do wells.