Given that debating race and IQ would make EA very unwelcoming for black people
Or white people.
Is there any chance you could reconsider? This post is not about my personal politics, or advocacy about any political candidates.
It’s about the perceived misuse of an EA-linked fund called Protect Our Future.
I could only find one, Robert Menendez, with positions that might be deemed anti-crypto before the FTX collapse. He sponsored some anti-money-laundering bills taking aim at Russia, Venezuela, and El Salvador: https://www.coinbase.com/public-policy/legislative-portal/nj/senate/n6vBNOTG2gwWQ1c3jOQqn
A handful of these candidates have also become anti-crypto post-FTX collapse, and are beginning to return or donate the money received from FTX/SBF. Chuy Garcia is one example.
Strongly agree. I definitely would like to see more content on neartermist causes/careers. But importantly, I would like to see this content contributed by authors who hold neartermist views and can do those topics justice. Whilst I am appreciative of 80,000 Hours and GWWC attempting to accommodate longtermism-skeptics with some neartermist content, their neartermist content feels condescending because it doesn’t properly showcase the perspectives of Effective Altruists who are skeptical of longtermist framings.
I also personally worry 80,000 Hours is seen as the “official EA cause prioritisation” resource and this:
alienates readers with different views and conclusions,
does not show that the EA community has diverse and varied views,
has misled readers into thinking there is an “official EA position” on best careers/ cause areas
Having more neartermist content will help with this, but I also would like to see 80,000 Hours host content from authors with clashing views. E.g., 80,000 Hours makes a very forceful case that Climate Change is not a material X-Risk, and I would like to see disagreeing writers critique that view on their site.
I also think you hit the nail on the head about many readers being unreceptive to longtermism for concerns like tractability, and that is entirely valid for them.
I’m surprised you didn’t bring up the most commonly cited definitions of capitalism and socialism.
Capitalism is the private ownership of the means of production.
Socialism is the public ownership of the means of production.
Whilst there’s a great deal of variation in how people envision their form of capitalism/socialism, the above are the generally agreed upon dictionary definitions of the two economic systems.
The CCC has high standards of research
Would you be able to point to something backing up this claim? Just a word of caution because I don’t believe this to be true (as I explain below).
Lomborg’s name might be familiar (or infamous) to those of us in Australia, where he was at the centre of a big political scandal: the conservative government at the time (then climate-skeptics) was perceived to be pushing universities to host the Copenhagen Centre, which was seen as political interference in the academic system.
Lomborg has been described as a climate contrarian in Science:
Once the darling of Australia’s conservative government, controversial climate contrarian Bjørn Lomborg has lost his Down Under cachet—and cash.
Australia’s Climate Council is critical of him:
https://www.climatecouncil.org.au/resources/the-low-down-on-lomborg/
And he has made debunked claims about Australian bushfires:
https://iceds.anu.edu.au/news-events/news/controversial-commentator-bjorn-lomborgs-bushfire-claim-debunked
I do understand that all this criticism is centred on Lomborg/his centre’s views on climate, which is separate to the cause areas you bring up with e-procurement and land tenure. But his track record on climate does make me cautious about their reputability.
Very eloquent. I do think the perception is justified, e.g. SBF’s attempt to elect candidates to the US Congress/Senate.
Is there anything that makes you skeptical that AI is an existential risk?
Influential EA philosophers having used racial slurs and saying they’re unsure about IQ and race is hurtful to black EAs, hurtful to black people outside EA and bad for future diversity in EA.
It’s also pseudoscience.
Is this really a fair description of IR Realism?
Mearsheimer, to his credit, was able to anticipate the Russian invasion of Ukraine. If his prescriptions had been heeded sooner, perhaps this conflict could have been avoided.
You could just as easily argue that Mearsheimer’s opponents have done more to enable the Russians.
I’m not saying I agree with Mearsheimer or understand his views fully, but I’m grateful his school of thought exists and is being explored.
I’m not the author, but there was a very prescient critique submitted to the EA criticism contest, that went underappreciated. https://medium.com/@sven_rone/the-effective-altruism-movement-is-not-above-conflicts-of-interest-25f7125220a5
UPDATE: actually, I realised the post did specifically mention this critique as an example.
The diagonal entries are defined in another way. See the link:
Apologies, I could have made this clearer. It is only those diagonal entries which are allowed to be negative. In fact they must be negative (or zero).
Technically, the diagonal entries are transitions from state i to state i (i.e. they are not really transitions but rather a measurement of “retention”). You can think of the sign as indicating whether an entry measures transitioning away from the state (positive, off-diagonal) or retention in the state (negative, diagonal).
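To make the convention concrete: reading these comments as describing a continuous-time transition-rate (generator) matrix, here is a minimal sketch with a hypothetical 3-state example. The matrix values and the helper function are illustrative assumptions, not taken from the linked post.

```python
# Hypothetical 3-state transition-rate (generator) matrix Q.
# Off-diagonal entries Q[i][j] (i != j) are non-negative rates of
# transitioning from state i to state j. Each diagonal entry Q[i][i]
# is minus the sum of the other entries in its row, so it is always
# negative (or zero) and measures "retention" in state i rather than
# a genuine transition.
Q = [
    [-0.5,  0.3,  0.2],
    [ 0.1, -0.4,  0.3],
    [ 0.0,  0.2, -0.2],
]

def is_valid_rate_matrix(q):
    """Check the conventions above: non-positive diagonal,
    non-negative off-diagonals, rows summing to zero."""
    for i, row in enumerate(q):
        if row[i] > 0:
            return False  # diagonal must be negative or zero
        if any(x < 0 for j, x in enumerate(row) if j != i):
            return False  # off-diagonals must be non-negative
        if abs(sum(row)) > 1e-12:
            return False  # each row must sum to zero
    return True

print(is_valid_rate_matrix(Q))  # True
```

Under this convention, only the diagonal entries can be negative, and each row summing to zero is what forces them to be.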
The harsh criticism of EA has only been a good thing, forcing us to have higher standards and rigour. We don’t want an echo chamber.
I would see it as a thoroughly good thing if Open Philanthropy were to combat the portrayal of itself as a shadowy cabal (like in the recent Politico piece), for example by:
Having more democratic buy-in with the public
e.g. having a bigger public presence in media, and relying on a more diverse pool of funding (i.e. less billionaire funding)
Engaging in less political lobbying
Being more transparent about the network of organisations around them
e.g. from the Politico article: ”… said Open Philanthropy’s use of Horizon … suggests an attempt to mask the program’s ties to Open Philanthropy, the effective altruism movement or leading AI firms”
Thank you Stephen for your long engagement with this topic, because I do think it is a very real risk that Effective Altruists should pay more attention to.
In addition to the actions you proposed, I also wanted to suggest there might be promising actions in reducing the conflicts of interest that incentivise conflict and escalate tensions. The high amounts of political lobbying and sponsoring of think tanks and universities by weapons companies create perverse incentives.
I have been very impressed by the work of the Quincy Institute to bring attention to this issue, and to explore diplomatic options as alternatives to conflict. I would love to see 80,000 Hours promote them on their job board or interview them.
I’ve written to my local MPs about banning contributions from weapons makers (Lockheed Martin, Boeing etc...) to the Australian Gov’t military think tank ASPI. Here in Australia the recent AUKUS security pact has seen an enormous increase in planned military spending and sparked some discussion on the forum. I am trying to raise this as an issue/cause area to explore amongst Aussie EAs.
I think it’s fair for Davis to characterise Schmidt as a longtermist.
He’s recently been vocal about AI X-Risk. He funded Carrick Flynn’s campaign which was openly longtermist, via the Future Forward PAC alongside Moskovitz & SBF. His philanthropic organisation Schmidt Futures has a future focused outlook and funds various EA orgs.
And there are longtermists who are pro AI like Sam Altman, who want to use AI to capture the lightcone of future value.
Just wanting to express my shared disappointment with how parts of this community embraced crypto/ gambling etc. as Gemma points out in her post.
If it is the case that MacAskill cannot be forthcoming for valid reasons (opening himself up to legal vulnerability), as a community it would still make sense for us to err on the side of caution and have other leaders for this community, as Chris argues.