ML safety researcher, working on NN interpretability.
Adrià Garriga Alonso
More than commercial, my understanding from purely public documents is that it was societal pressures.
But I agree with you two on the spirit.
I was scared when seeing the title. Then I read a little bit more:
That is, the entire good of humanity may be outweighed by the cumulative suffering of farmed animals, with total animal suffering growing faster than human wellbeing is increasing, especially in recent decades
I already thought that was the case. It’s really sad, but at least it’s not another massive source of suffering to add to my list. Thank you for substantiating this with a calculation!
In my read, this post is not about whether having children (literal ‘pro’ ‘natalism’) is correct or not. I think having a debate about that is great, and I’m inclined towards the ‘yes’ side.
It’s about pointing to signs suggesting the existence of a power-seeking faction within EA that (by its own admission) is attempting to co-opt the movement for its own aims.
(Why the hedging in the previous paragraph: stating that your faction is “now 100X more likely to become a real, dominant faction” is not quite stating your intention to make it dominate, it just suggests it.)
I think co-opting a broad altruism movement for particular people’s pet causes (or selfish aims) is bad. There is a very real possibility that EA will be co-opted by power-seekers and become another club for career-conscious people who want to get ahead. I support good attempts to prevent that.
Pointing this out is separate from the question of whether the purported ‘faction’ is in fact a faction, and whether it is in fact trying to co-opt the movement.
Work trials (paid, obviously) are awesome for better hiring, especially if you’re seeking good candidates who don’t fulfill the traditional criteria (e.g. coming from an elite US/UK university). Many job seekers don’t have current employment anyway.
Living with other EAs or your coworkers is mostly fine too, especially if you’re in a normal living situation, like most EA group houses are.
These suggestions aren’t great. I agree with the “Don’t date” ones, but those were already argued for elsewhere.
Great, thanks, I’ve set up a recurring donation!
EDIT: apparently they’re very time-constrained, so I’ll give $13.3k as a lump sum instead.
I would say you should just donate it now. Gift Aid is just very efficient, and we have plenty of effective interventions in all these areas to do now.
For x- and s-risk and global development (areas that benefit from research and can accumulate knowledge) the time of highest leverage is plausibly now, and the earlier the better.
The report you link says that the largest cause of later cost-effectiveness is “exogenous learning”, i.e. research that happens regardless and makes marginal interventions more effective. If that is the case, why not invest in the learning itself, to make effective interventions for everyone appear closer in time?
I’m less sure about animal welfare. Research affects it somewhat but it’s mostly advocacy, and I’m not sure how that accumulates over time. I think you should give now too.
No, I think your table is substantially better than ChatGPT’s, because it factors out the two alignment dimensions into two spatial dimensions.
Very cool, I didn’t actually believe that other regulatory regimes emulated the EU, but I believe it a little bit now. The large number of GDPR emulations surprised me.
One thing I don’t quite get:
This complicated architecture has also had a 5.2% growth rate in all its bodies combined, with most of the staff being highly educated (usually possessing a master’s degree).
This is a growth in the number of staff?
Both of these factors resulted in a signal of competence to other countries in the world, which results in a higher degree of trust in the EU’s decisions and policies.
Why does having more staff signal competence? Is it because more budget implies the agencies are being taken seriously?
No, getting rid of factory farming (“fiat iustitia”) won’t increase X-risk (“pereat mundus”).
Or are you implying that resources are in competition for the two? (Perhaps weakly true)
Yes.
thank machine doggo
Literally everyone knows he was the Masculine Mongoose. Superheroes don’t even try to hide their identity any more.
Oops, thank you for the correction! My mistake. I still like “EA workshop” more, since attendees are thinking about their life plans and working on improving them.
Seminar is also pretty religious. I very much like “EA workshops”
In 1951, Alan Turing argued that at some point computers would probably exceed the intellectual capacity of their inventors, and that “therefore we should have to expect the machines to take control.” Whether that is a good or a bad thing depends on whether the machines are benevolent towards us or not. (Partial source: https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom )
[for policy makers]
It is a mistake to assume that AI researchers are driven by the positive consequences of their work. Geoffrey Hinton, winner of a Turing Award for his enormous contribution to deep neural networks, is not optimistic about the effects of advanced AI, or whether humans can decide what it does. In a 2015 meeting of the Royal Society, he stated that “there is not a good track record of less intelligent things controlling things of greater intelligence”, and that “political systems will use [AI] to terrorize people”. Nevertheless, he presses on with his research, because “the prospect of discovery is too sweet”.
(source for the quotes: https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom )
[for any audience]
The chief asset of the human species is our intelligence: with it, we have settled all over and transformed the world. Most machine learning researchers expect AI to surpass human intelligence in all areas within a lifetime (source ). When that happens, humanity will find itself in the same place as chimpanzees: with our fate at the mercy of the most intelligent species. As deep learning pioneer Geoffrey Hinton noted, “there is not a good track record of less intelligent things controlling things of greater intelligence”.
This seems better than having to make an entirely new dating site.
It is unclear to me whether this is a good idea. Sci-Hub is great, but whoever does this would face a good amount of legal risk. If EA organisations (e.g. American ones) are known to be funding this, they face the risk of lawsuits and reputational damage.
I think that, at minimum, this post should not be publicized too widely. Maybe nobody else commented on it for precisely this reason?
What does the WANBAM acronym (assuming it is one) stand for? Presumably Women And Non-Binary… Altruism Movement?
(apologies if the question is irrelevant but I’m very curious, and I couldn’t find this in the post or the website)
I did it too, thank you so much for posting! I did not use your templates at all; it was easier to write stream-of-consciousness about why I think cages are bad.