What I think was shady here:
Why would Will want SBF to buy Twitter, and think it worth billions? Apart from thinking it was a great business investment, a strong contender for the reason is ‘propaganda for our values’. That’s not very integrity-like. (If anyone can fill in the gaps there, please do.) It’s hard to read the proposal as only being motivated by investing, because Will says in his opening DM: “Sam Bankman-Fried has for a while been potentially interested in purchasing it and then making it better for the world”
It’s an example of how EA was too trusting of SBF
Seems like poor judgement given the price tag
A general sense that I would be ashamed for this to leak if I were Will (I had this sense before recent revelations about SBF).[1]
So I would very much appreciate an explanation by Will of what his motive was here, and who he consulted on this monumental decision. If nothing else, it would model transparency and accountability.
I should have been more public about my feelings at the time, but didn't, out of what I guess was cowardice and not wanting to tarnish EA's reputation, which is a dishonourable impulse
Feels a bit weird to me that you are speaking about “EA” doing something here, as it seems pretty clear that this was Will acting in a personal capacity.
(This is in no way trying to defend his actions, but I think it's an important difference.)
Edit: This comment referred to an earlier version of David's comment that talked about EA wanting to buy Twitter, etc.
I’ve edited the comment now. I agree that Will’s actions are not EA’s actions, and I phrased it weirdly.
I was assuming that any reason Will might have wanted SBF to buy Twitter would be justified in terms of benefit to EA.
So to be clearer, the question to Will would be “Why would it be in the interest of EA for you to facilitate someone close to EA to buy Twitter?”
I think it'd be easy to come up with highly impactfwl things to do with free rein over Twitter? Like, even before I've thought about it, there should be a high prior on usefwl patterns. Brainstorming:
Experiment with giving users control over recommender algorithms, and/or designing them to serve the long-term interests of the users themselves (because you're ok with foregoing some profit in order not to aggressively hijack people's attention)
Optimising the algorithms for showing users what they reflectively prefer (eg. what do I want to want to see on my Twitter feed?)[1]
Optimising algorithms for making people kinder (eg. downweighting views that come from bandwagony effects and toxoplasma), but still allowing users to opt-out or opt-in, and clearly guiding them on how to do so.
Trust networks
Liquid democracy-like transitive trust systems (eg. here, here)
I can see several potential benefits to this, but most of the considerations are unknown to me, which just means that there could still be massive value that I haven’t seen yet.
This could be used to overcome Vingean deference limits and allow for hiring more competent people more reliably than academic credentials (I realise I’m not explaining this, I’m just pointing to the existence of ideas enabled with Twitter)
This could also be a way to “vote” for political candidates or decision-makers in general too, or be used as a trust metric to find out whether you want to vote for particular candidates in the first place.
Platform to arrange vote swapping and similar, allowing for better compromises and reducing hostile zero-sum voting tendencies.
Platform for highly visible public assurance contracts (eg. here); could potentially be great for cooperation between powerfwl actors or large groups of people.
This also enables more visibility for views that are held back by pluralistic ignorance. This could be both good and bad, depending on the view (eg. both "it's ok to be gay" and "it's not ok to be gay" can be held back by pluralistic ignorance).
Could also be used to coordinate actions in a crisis
eg. the next pandemic is about to hit, and it’s a thousand times more dangerous than covid, and no one realises because it’s still early on the exponential curve. Now you utilise your power to influence people to take it seriously. You stop caring about whether this will be called “propaganda” because what matters isn’t how nice you’ll look to the newspapers, what matters is saving people’s lives.
Something-something nudging idk.
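(For concreteness on the liquid-democracy bullet above: the core mechanic is just transitive delegation. This is a minimal sketch under my own assumptions, not anything proposed in the thread; all names and the cycle-handling rule are hypothetical.)

```python
# Hypothetical sketch of liquid-democracy-style transitive delegation:
# each user either votes directly or delegates to another user, and
# votes flow along delegation chains. Cycles, and chains that end at
# someone who never voted, are treated as abstentions here.

def tally(direct_votes, delegations):
    """Resolve each user's effective vote by following delegation chains.

    direct_votes: {user: choice} for users who voted directly.
    delegations:  {user: delegate} for users who delegated instead.
    """
    results = {}
    for user in set(direct_votes) | set(delegations):
        seen = set()
        current = user
        # Walk the delegation chain until we hit a direct voter or a cycle.
        while current in delegations and current not in seen:
            seen.add(current)
            current = delegations[current]
        results[user] = direct_votes.get(current)  # None means abstain
    return results

votes = tally(
    direct_votes={"alice": "yes", "bob": "no"},
    delegations={"carol": "alice", "dave": "carol", "erin": "erin"},
)
# dave's vote flows dave -> carol -> alice, so it counts as "yes";
# erin delegates to herself (a cycle), so she abstains.
```

The same transitive structure is what would let it double as a trust metric: replace "vote choice" with "endorsement" and you get chains of who vouches for whom.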
Mostly, even if I thought Sam was in the wrong for considering a deal with Elon, I find it strange to cast a negative light on Will for putting them in touch. That seems awfwly transitive. I think judgments for transitive associations are dangerous, especially given incomplete information. Sam/Will probably thought much longer on this than I have, so I don’t think I can justifiably fault their judgment even if I had no ideas on how to use twitter myself.
This idea was originally from a post by Paul Christiano some years ago where he urged FB to adopt an algorithm like this, but I can’t seem to find it rn.
Very good comment. I now think that buying Twitter could make sense. (Partly also because I realised that if Twitter is an investment that makes you money, any impact on top is kind of costless. It’s not the case that either the motivation was ‘buy Twitter to make it better by our lights’ or ‘make more money’.)
I still think it's fair to judge people for helping someone do something you think might be bad, or, as you call it, for transitive associations (see my comment here for more detail).
I also still think it would be a good move for Will to explain what was going on, for the sake of modelling transparency.
+1 - when I saw those messages, I felt uncomfortable about it, because I couldn't imagine any good reason for EAs to want to buy Twitter, and because, as far as I saw, it hadn't been discussed publicly at all. So if SBF had bought Twitter (with or without Will's advice, support, or backing) with the money that he said he was planning to use for EA causes, that would have seemed inappropriately unilateral to me (even if the recent fraud stuff hadn't happened).
"Why would Will want SBF to buy Twitter, and think it worth billions? Apart from thinking it was a great business investment, a strong contender for the reason is 'propaganda for our values'."

I don't consider this a plausible motivation at all. Even assuming Will's judgment wasn't great here, he's clearly smart enough to know that it would make an incredibly bad impression to take over a site this large, this associated with free speech (or the lack thereof), and turn it into something ideologically biased. You'd have to be incredibly naive to think that has a shot at going well (and that's almost a bigger issue than it being "not very high-integrity"). In any case, I think it's much more likely Will wanted to raise the world's sanity waterline by improving public discourse norms. (Emrik made a comment with some example ideas.) (Edit: Just saw you already replied to Emrik and changed your mind somewhat, very cool!)
Another great point on transparency: I want Will to be transparent about his involvement in this. Given his reach and influence on EA, he should be able to explain his thought process and what the objective of his endorsement was.