And of course a lot less positive attention. Indeed a very substantial fraction of the current non-OP funding (like Vitalik’s and Jed’s and Jaan’s funding) is downstream of these “weirder” things. “Less is less” requires there to be less, but my sense is that your actions substantially decrease the total support that EA and adjacent efforts will receive both reputationally and financially.
Can you say more about that? You think our prior actions caused additional funding from Vitalik, Jed, and Jaan?
We’re still going to be funding a lot of weird things. I just think we got to a place where the capital felt ~infinite and we assumed all the other inputs were ~infinite too. AI safety feels like it deserves more of those resources from us, specifically, in this period of time. I sincerely hope it doesn’t always feel that way.
I don’t know the full list of sub-areas, so I cannot speak with confidence, but the ones that I have seen defunded so far seem to me like the kind of things that attracted Jed, Vitalik and Jaan. I expect their absence will atrophy the degree to which the world’s most ethical and smartest people want to be involved with things.
To be more concrete, I think funding organizations like the Future of Humanity Institute, LessWrong, MIRI, Joe Carlsmith, as well as open investigation of extremely neglected questions like wild animal suffering, invertebrate suffering, decision theory, mind uploads, and macrostrategy research (like the extremely influential Eternity in Six Hours paper) played major roles in people like Jaan, Jed and Vitalik directing resources towards things I consider extremely important, and Open Phil has at points in the past successfully supported those programs, to great positive effect.
Inasmuch as the other resources you are hoping for are things like:
more serious consideration from the world’s smartest people,
or more people respecting the integrity and honesty of the people working on AI Safety,
or the ability for people to successfully coordinate around the safe development of extremely powerful AI systems,
I highly doubt that the changes you made will achieve these, and am indeed reasonably confident they will harm them (inasmuch as your aim is just to have fewer people get annoyed at you, my guess is you have still chosen a path of showing vulnerability to reputational threats that will overall increase the unpleasantness of your life, but I have less of a strong take here).
In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
Also, to be clear, my current (admittedly very limited) sense of your implementation is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas. Lightcone Infrastructure obviously works on AI Safety, but apparently not in a way that would allow Open Phil to grant to us (despite what seems to me undeniably large effects on thousands of people working on AI Safety, in myriad ways).
Based on the email I have received, and things I’ve picked up through the grapevine, you did not implement something that is best described as “reducing the number of core priority areas” but instead is better described as “blacklisting various specific methods, associations or conclusions that people might arrive at in the pursuit of the same aims as you have”. That is what makes me much more concerned about the negative effects here.
The people I know who are working on digital minds are clearly doing so because of their models of how AI will play out, and this is their best guess at the best way to make the outcomes of that better. I do not know what they will find in their investigation, but it sure seems directly relevant to specific technical and strategic choices we will need to make, especially when pursuing projects like AI Control as opposed to AI Safety.
AI risk is too complicated a domain to enshrine what conclusions people are allowed to reach. Of course, we still need to have standards, but IMO those standards should be measured by intellectual consistency, accurate predictions, and a track record of improving our ability to take advantage of new opportunities, not by distant second-order effects on the reputation of one specific actor and their specific models about political capital and political priorities.
It is the case that we are reducing surface area. You have a low opinion of our integrity, but I don’t think we have a history of lying as you seem to be implying here. I’m trying to pick my battles more, since I feel we picked too many. In pulling back, we focused on the places somewhere in the intersection of low conviction + highest pain potential (again, beyond “reputational risks”, which narrows the mind too much on what is going on here).
>> In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
I agree with the spirit of the way this is written, but not with the way it is practiced. I wrote more about this here. If the rationality community wants carte blanche in how they spend money, they should align with funders who sincerely believe more in the specific implementation of this ideology (esp. vis-à-vis decoupling). Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
My sense is that such alignment is achievable and will result in a more coherent and robust rationality community, which does not need to be inextricably linked to all the other work that OP and EA does.
I find the idea that Jaan/Vitalik/Jed would not be engaged in these initiatives if not for OP pretty counterintuitive (and perhaps more importantly, that a different world could have created a much larger coalition), but don’t really have a good way of resolving that disconnect further. Evidently, our intuitions often lead to different conclusions.
And to get a little meta, it seems worth pointing out that you could be taking this whole episode as an empirical update about how attractive these ideas and actions are to constituents you might care about, and instead your conclusion is “no, it is the constituents who are wrong!”
>> Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time.
Indeed, “if I have the time” is precisely the problem. I can’t know everyone in this community, and I’ve disagreed with the specific outcomes on too many occasions to trust by default. We started by trying to take a scalpel to the problem, and I could not tie initial impressions at grant time to those outcomes well enough to feel that was a good solution. Empirically, I don’t sufficiently trust OP’s judgement either.
There is no objective “view from EA” that I’m standing against, much as people portray it that way here; just a complex jumble of opinions and path dependence and personalities with all kinds of flaws.
>> Also, to be clear, my current (admittedly very limited) sense of your implementation is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas.
So with that in mind, this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is I can’t divorce myself from responsibility for them, reputationally or otherwise.
After much introspection, I came to the conclusion that I prefer to leave potential value on the table than persist in that situation. I don’t want to be responsible for that community anymore, even if it seems to have positive EV.
(Just want to say, I really appreciate you sharing your thoughts and being so candid, Dustin. I find it very interesting and insightful to learn more about your perspective.)
>> this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is I can’t divorce myself from responsibility for them, reputationally or otherwise.
I do think the top-level post could have done a better job at communicating the more blacklist-like nature of this new policy, but I greatly appreciate you clarifying that more in this thread (and I also would not have described what’s going on in the top-level post as “lying”).
Your summary here also seems reasonable, based on my current understanding, though of course the exact nature of the “broad strokes” is important to be clear about.
Of course, there is lots of stuff we continue to disagree on, and I will reiterate my willingness to write back and forth with you, or talk with you, about these issues as much as you are interested, but I don’t want to make you feel like you are stuck in a conversation that realistically we are not going to make that much progress on in this specific context.
I definitely think some update of that type is appropriate; our discussion just didn’t go in that direction (and bringing it up felt a little too meta, since it takes the conclusion of the argument we are having as a given, which in my experience is a hard thing to discuss at the same time as the object level).
I expect that in a different context, where your conclusions here aren’t the very thing we are debating, I will concede the cost of you being importantly alienated by some of the work I am in favor of.
Though to be clear, I think an important belief of mine, which I am confident the vast majority of readers here will disagree with, is that the aggregate portfolio of Open Phil and Good Ventures is quite bad for the world (especially now, given the updated portfolio).
As such, it’s unclear to me what I should feel about a change where some of the things I’ve done are less appealing to you. You are clearly smart and care a lot about the same things as I care about, but I also genuinely think you are causing pretty huge harm for the world. I don’t want to alienate you or others, and I would really like to maintain good trade relationships inasmuch as that is possible, since we clearly have identified very similar crucial levers in the world, and I do not want to spend our resources in negative-sum conflict.
I still think hearing that the kind of integrity I try to champion and care about failed to resonate with you, and failed to compel you to take better actions in the world, is crucial evidence of a kind I care a lot about. You are clearly smart and thoughtful about these topics, and I care a lot about the effect of my actions on people like you.
(This comment overall isn’t obviously immediately relevant, and probably isn’t worth responding to, but I felt bad having my previous comment up without giving this important piece of context on my beliefs)
>> The aggregate portfolio of Open Phil and Good Ventures is quite bad for the world… I also genuinely think you are causing pretty huge harm for the world.
Can you elaborate on this? Your previous comments explain why you think OP’s portfolio is suboptimal, but not why you think it is actively harmful. It sounds like you may have written about this elsewhere.
>> This comment overall isn’t obviously immediately relevant
My experience of reading this thread is that it feels like I am missing essential context. Many of the comments seem to be responding to arguments made in previous, perhaps private, conversations. Your view that OP is harmful might not be immediately relevant here, but I think it would help me understand where you are coming from. My prior (which is in line with your prediction that the vast majority of readers would disagree with your comment) is that OP is very good.
He recently made this comment on LessWrong, which expresses some of his views on the harm that OP causes.

>> You have a low opinion of our integrity, but I don’t think we have a history of lying as you seem to be implying here.
Sorry, maybe I missed something, where did I imply you have a history of lying? I don’t currently believe that Open Phil or you have a history of lying. I think we have disagreements on dimensions of integrity beyond that, but I think we both care deeply about not lying.
>> If the rationality community wants carte blanche in how they spend money, they should align with funders who sincerely believe more in the specific implementation of this ideology (esp. vis-à-vis decoupling)
I don’t really know what you mean by this. I don’t want carte blanche in how I spend money. I just want to be evaluated on my impact on actual AI risk, which is a priority we both share. You don’t have to approve of everything I do, and indeed I think allowing people to choose the means by which they achieve a long-term goal is one of the biggest reasons for historical EA philanthropic success (as well as a lot of the best parts of Silicon Valley).
A complete blacklist of a whole community seems extreme, and rare, even for non-EA philanthropists. Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time. Don’t blacklist work associated with a community on the basis of a disagreement about its optimal structure. You absolutely do not have to be part of a rationality community to fund it, and if you are right about its issues, that will be reflected in its lack of impact.
>> Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
I don’t really think this is a good characterization of the rationality community. It is true that the rationality community engages in heavy decoupling, where we don’t completely dismiss people on one topic because they have socially shunned opinions on another, but that seems very importantly different from inviting everyone who fits that description “into the fold”. The rationality community has a very specific epistemology and is overall, all things considered, extremely selective in who it assigns lasting respect to.
You might still object to that, but I am not really sure what you mean by “inviting into the fold” here. I am worried you have walked away with some very skewed opinions through some unfortunate tribal dynamics, though I might also be misunderstanding you.
>> I find the idea that Jaan/Vitalik/Jed would not be engaged in these initiatives if not for OP pretty counterintuitive (and perhaps more importantly, that a different world could have created a much larger coalition)
As an example, I think OP was in a position to substantially reduce the fallout from FTX, both by a better follow-up response, and by having done more things in advance to prevent things like FTX.
And indeed as far as I can tell the people who had the biggest positive effect on the reputation of the ecosystem in the context of FTX are the ones most negatively impacted by these changes to the funding landscape.
It doesn’t seem very hard to imagine different ways that OP grantmaking could have substantially changed whether FTX happened in the first place, or at least the follow-up response to it.
I feel like an underlying issue here is something like “you feel like you have to personally defend or engage with everything that OP funds”.
You of course know better what costs you are incurring, but my sense is that you can just give money to things you think are good for the world, and this will overall result in more political capital, and respect, than the world where you limit yourselves to only the things you can externally justify or expend other resources on defending. The world can handle billionaires spending billions of dollars on yachts and luxury expenses in a way that doesn’t generally influence their other resources much, which I think suggests the world can handle billionaires not explaining or defending all of their giving-decisions.
My guess is there are lots of things at play here that I don’t know about or understand, and I do not want to contribute to the degree to which you feel like every philanthropic choice you make comes with social costs and reduces your non-financial capital.
I don’t want to drag you into a detailed discussion, though know that I am deeply grateful for some of your past work and choices and donations, and if you did ever want to go into enough detail to make headway on these disagreements, I would be happy to do so.
>> Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
You and I disagree on this, but it feels important to say that we disagree. To me, LessWrong has about the same amount of edgelordism as the forum (not much).
Who are these “fringe opinion holders” brought “into the fold”? To the extent that this is maybe a comment about Hanania, it seems pretty unfair to blame that on rationality. Manifest is not a rationalist event significantly more than it is an EA one (if anything, most of the money was from OP, not SFF, right?).
To the extent this is a load-bearing part of your decision-making, it just seems not true: rationalism isn’t getting more fringe, and rationality seems to have about as much edginess as EA.
My view is that rationalists are the force that actively makes room for it (via decoupling norms), even in “guest” spaces. There is another post on the forum from last week that seems like a frankly stark example.
I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.
Anyone know what post Dustin was referring to? EDIT: as per a DM, probably this one.