I’d advise any sufficiently large grantee to speak to a lawyer on this.
Peter Berggren
I wonder why CEA feels the need to comment on what seems to be a personal matter unrelated to CEA programming. While I understand how seductive it can be to criticize someone who has said something reprehensible, especially when it comes to light alongside a clumsily worded apology, I question whether this really relates to CEA, or whether this would have been a good time to practice the Virtue of Silence.
For the record, if anyone is willing to coordinate a split from EA as a whole for those global poverty and animal rights EAs who wish to improve their optics, even at the expense of epistemics, I would gladly assist, despite not being in that group. Let me know if anyone wants help with this.
Would highly recommend Ozy’s piece on the subject:
https://thingofthings.substack.com/p/three-difficulties-with-trying-to
The second video seems really interesting to me, as someone who’s into moral philosophy. For me, though, the first video falls into “it’s bad on purpose to make you click” territory.
A Brief Argument for Rapid Movement Growth
Thanks :3
This is my first post, and I’m so happy to see that you appreciate it. I’ll try to address the “dilution” concern in more depth in my later posts.
Graphic Design is Our Passion
The original joke did not recognize the individuality of all shrimps. I made sure to use the right shribboleth here.
This was an announcement by the False Emperor.
Bayesed.
I am. Which organization is lobbying on that? I’d be happy to join.
I really appreciate this level of openness about possible changes, even though I disagree with almost every suggestion made here. I think that EA is chronically lacking in coordination and centralized leadership, and that its primary failures of late (obsessive self-flagellation, complete panic over minor incidents) could be resolved by a more coordinated strategy. As such, I feel that the “market” structure will collapse in on itself fairly quickly if we do not fix our organizational culture to stop panic spirals.
However, I do have a suggestion for resolving the monopsony issue. CEA and other movement-building organizations should focus large amounts of active fundraising effort on other billionaires (similarly to what many other charities do behind the scenes), and the community should become more supportive of earning-to-give (as many supposed “talent constraints” can in fact be resolved with enough funding for hiring).
The Bostrom email situation and the Tegmark grant proposal situation both seem very minor to me, at least compared to many other things that have happened to EA in the past and drawn the same amount of panic or less.
BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?
I didn’t mean to imply that Emile Torres didn’t think that this was an extinction risk. I’m sorry that I misspoke on that part.
I never denied that they have published their arguments in many places. I just can’t find any such arguments that are object-level.
As an aside, the idea that we should prioritize optics over intellectually honest exploration of the epistemic landscape is deeply harmful to effective altruism as a whole.
It seems to me that, while the form/meaning distinction in this paper is certainly fascinating if your interests tend towards philosophy of language, it says very little about supposed inherent limitations of language models and does not affect forecasts of existential risk.
A bit more context would probably be appreciated by anyone who does not know the intricacies of the Bernie Madoff story.