I don’t really know… I suspect some kind of first-order utility calculus, one which tallies up the number of agents helped per dollar weighted according to their species, makes animal welfare look better by a large degree. But in terms of moving the world further along the “good trajectory”, for some reason the idea of eliminating serious preventable diseases in humans feels like a more obvious next step along that path?
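To make the comparison concrete, here’s a toy sketch of the kind of first-order tally I mean. Every number in it is invented purely for illustration, not a real cost-effectiveness estimate:

```python
# A minimal sketch of the "first-order utility calculus" described above.
# All figures are made up for the sake of the example.

def weighted_impact_per_dollar(agents_helped_per_dollar: float, species_weight: float) -> float:
    """Tally of moral value per dollar: agents helped, discounted by a species weight."""
    return agents_helped_per_dollar * species_weight

# Hypothetical inputs: animal interventions help vastly more individuals per dollar,
# so even a steep species discount can leave them ahead on this tally.
human_health = weighted_impact_per_dollar(agents_helped_per_dollar=0.0002, species_weight=1.0)
animal_welfare = weighted_impact_per_dollar(agents_helped_per_dollar=10.0, species_weight=0.01)

print(animal_welfare > human_health)  # under these made-up weights, the tally favours animals
```

The point of the sketch is just that under any simple multiplicative weighting, scale tends to dominate, which is why this kind of calculus favours animal welfare even when the intuition about the “good trajectory” pulls the other way.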
Xavier_ORourke
Not really a question but… if you guys ever released a piece of merch that was insanely expensive but most of the cost went to charity (e.g. some special edition $3000 Mr Beast branded Tshirt where you guys give 3k to GiveDirectly for every unit sold), I’d wanna buy them for all my friends.
A priori, what is the motivation for elevating the very specific “biological requirement” hypothesis to the level of particular consideration? Why is it more plausible than similarly prosaic claims like “consciousness requires systems operating between 30 and 50 degrees celsius” or “consciousness requires information to propagate through a system over timescales between 1 millisecond and 1000 milliseconds” or “consciousness requires a substrate located less than 10,000km away from the center of the earth”?
It seems a little weird to me that most of the replies to this post are jumping to the practicalities/logistics of how we should/shouldn’t implement official, explicit, community-wide bans on these risky behaviours.
I totally agree with OP that all the things listed above generally cause more harm than good. Most people in other cultures/communities would agree that they’re the kind of thing which should be avoided, and most other people succeed in avoiding them without creating any explicit institution responsible for drawing a specific line between correct/incorrect behavior or implementing overt enforcement mechanisms.
If many in the community don’t like these kinds of behaviours, we can all contribute to preventing them by judging things on a case-by-case basis and gently but firmly letting our peers know when we disapprove of their choices. If enough people softly disapprove of things like drug use or messy webs of romantic entanglement, this can go a long way towards reducing their prevalence. No need to draw bright lines in the sand or enshrine these norms in writing as exact rules.
Sorry I might not have made my point clearly enough. By remaining anonymous, the OP has shielded themselves from any public judgement or reputational damage. Seems hypocritical to me given the post they wrote is deliberately designed to bring about public judgement and affect the reputation of Nick Bostrom.
So I’m saying “if OP thinks it’s okay to make a post which names Nick and invites us all to make judgements about him, they should also have the guts to name themselves”
I really don’t think the crux is people who disagree with you being unwilling to acknowledge their unconscious motivations. I fully admit that sometimes I experience desires to do unsavory things such as
- Say something cruel to a person that annoys me
- Smack a child when they misbehave
- Cheat on my taxes
- Gossip about people in a negative way behind their backs
- Eat the last slice of pizza without offering it to anyone else
- Not stick to my GWWC pledge
- Leave my litter on the ground instead of carrying it to a bin
- Lie to a family member and say “I’m busy” when they ask me to help them with home repairs
- Be unfaithful to my spouse
- etc.

If you like, for the sake of argument let’s even grant that for all the nice things I’ve ever done for others, ultimately I only did them because I was subconsciously trying to attract more mates (leaving aside the issue that if this were my goal, EA would be a terribly inefficient means of achieving it).
Even if we grant that that’s how my subconscious motivations are operating, it still doesn’t matter. It’s still better for me to not go around hitting on women at EA events, and the EA movement is still better off if I’m incentivised not to do it.
Maybe all men have a part of ourselves which wants to live the life of Genghis Khan, torture our enemies, and impregnate every attractive person we ever lay eyes on. But even if that were true, it wouldn’t imply it’s ethical or rational to indulge that fantasy! And it definitely wouldn’t imply that the EA project would be better off if we designed our cultural norms, taboos, and signals of prestige in ways which encourage it.

The better I am at not giving in to these shitty base urges, and the more the culture around me supports and rewards me for not doing these degenerate things, the happier I will be in the long run and the more positive an impact I will have on those around me.
If EA community organisers are ending up isolated from everyone not involved in EA, that’s a really big problem!
Also the claim that
”We’ll end up as lonely, dispirited incels rowing our little boats around in circles, afraid to reach out, afraid to fall in love.”
strikes me as patently false, given that I and many people I know personally who engage with EA have partners from outside the EA community.
The main reason I disagree is that to me it seems plainly obvious that it’s far better for a community organiser’s motivations to be related to earning respect, advancing their career, or helping others, rather than participating in EA so they can have more sex. If they’re motivated by wanting to have more sex, that predictably leads to more drama and more sexual harassment.
I also don’t think you did enough to back up the inference “lots of people are motivated by sex, therefore we should try to harness this, instead of encouraging people to suppress these instincts in problematic contexts”.
As a comparison, lots of people get excited by conflict and gossip too. That doesn’t automatically mean we should be trying to harness, rather than suppress, those things.
Why would it be a problem for long-term community builders?
Anecdote: I used to help run a local university group in Australia. While helping run that group, I didn’t try to date or sleep with attendees. Also while running that group, I met a wonderful woman in a separate context who wasn’t involved in the EA community; we entered into a relationship, and we are now happily married and expecting a child.
I’ve also got lots of EA friends who’ve done community building in the past and are in really happy romantic relationships with spouses they met in a non-EA context as well.
If Bostrom is not entitled to protection from random people on the forum making judgments about him, why should it be any different for OP?
From where I sit, it’s really hard to guess at all the details and relevant context of what’s going on (which is why I feel a bit stupid commenting on it… but I guess I can’t resist lol).
Is FHI the only org being subject to a hiring freeze? Or is the university/philosophy department cutting costs in many places? Are conflicts with the philosophy department basically FHI’s fault? Or is the bureaucracy dysfunctional/unfriendly to FHI in ways which made it impossible to keep them happy without making other costly tradeoffs? If Nick steps down as director, is there somebody else waiting in the wings who is likely to do a better job and successfully resolve the issue?
The only thing I know for sure looking in from the outside is that FHI has been doing really really great work 🤷
If Bostrom did step down as FHI director, who is likely to replace him? How confident are you that a new director will succeed in resolving conflicts with the broader philosophy department?
I have very little direct experience with FHI (just a very brief internship) but from the outside it looks like FHI has produced some really amazing research while Bostrom has directed it.
Perhaps a good way to appraise whether FHI has been performing above/below par during Bostrom’s directorship is to compare its output to a similar organisation such as Global Priorities Institute. How would you compare the value of work done by FHI versus GPI? I don’t know enough to be confident in this, but to me it seems like FHI has generated far more value (not that Bostrom is the only person to thank for this, but it seems like an important piece of evidence).
In any case, the views and opinions of random users of this forum like me who aren’t directly involved with FHI don’t mean much, and I don’t really see the benefits of raising this question in public on the EA forum.
We should be careful to avoid dismissing a simple easy solution to a real problem because it might fail to solve an imaginary one. Do you really think the community currently has a problem with bosses pressuring their direct reports to help them move house?
How do you know we don’t live in a world where >90% of the problem is specifically due to people having sex / trying to have sex with each other? What would convince you that sex is the culprit, rather than interpersonal relationships in general?
In my opinion, a very attractive compromise which many other cultures adopt is to keep everything you love about the deep relationships except for the sex. People having sex with each other is uniquely prone to causing harm, drama, and conflict.
I don’t think we’ll ever see a TIME article exposing the problem that someone in EA had too many people offer to help them move house, or that community events were filled with too much warmth and laughter, or that people offered too much emotional support to someone when they lost a parent.
More friendship and loyalty and support and love and fun and shared moments of vulnerability is fine! Just leave out the sex part!
Thanks! Now that SBF has been disavowed do you think EA still has a big problem with under-emphasising conflicts of interest?
I still think the best critiques benefit from being extremely concrete, and that article could have had more impact if it spent less time on the high-level concept of “conflicts of interest” and more time explicitly saying “crypto is bad and it’s a problem that so many in the community don’t see this”.
Dear authors—could you please provide at least one concrete example of a high-quality “deep critique” of Effective Altruism which you think was given inadequate consideration?
If there were no difference at all between the beliefs/values/behaviours of the average member of this community and the average member of the human species, then there would be no reason for the concept “Effective Altruism” to exist at all.
It would be a terrible thing for our community to directly discriminate against traits which are totally irrelevant to what someone has to offer the EA project (such as race/gender/sexual preference), and I’ve never heard anyone around here disagree with that.
But when it comes to traits such as being highly intelligent, not being a political extremist, or having intellectual curiosity about any part of the universe other than our comparatively tiny planet (aka “thinks space is cool”), having these traits be over-represented in the community is an obviously good thing!
Dear authors, if you think the community at large has the wrong idea about moral philosophy, I think the best response is to present compelling arguments which criticize utilitarianism directly!
If you think the community at large has the wrong economic/political beliefs, please argue against these directly!
Or if you think a particular organisation in the movement is making a particular mistake which it wouldn’t have made had it consulted more domain experts, please lay out a compelling case for that as well!
This is consistent with the point I’m trying to make: all human interactions in all contexts happen within a super complex web of norms and taboos, and any proposal as simple as “just let people do whatever they want” is a non-starter.
I think an important consideration being overlooked is how competently a centralised project would actually be managed.
In one of your charts, you suggest that worlds where there is a single project will make progress faster due to a “speedup from compute amalgamation”. This is not necessarily true. It’s very possible that different teams would make progress at very different rates even if both were given identical compute resources.
At a boots-on-the-ground level, the speed of progress an AI project makes will be influenced by thousands of tiny decisions about how to:
Manage people
Collect training data
Prioritize research directions
Debug training runs
Decide who to hire
Assess people’s performance and decide who should be promoted to more influential positions
Manage code quality/technical debt
Design+run evals
Transfer knowledge between teams
Retain key personnel
Document findings
Decide what internal tools to use/build
Handle data pipeline bottlenecks
Coordinate between engineers/researchers/infrastructure teams
Make sure operations run smoothly
The list goes on!
Even seemingly minor decisions like coding standards, meeting structures and reporting processes might compound over time to create massive differences in research velocity. A poorly run organization with 10x the budget might make substantially less progress than a well-run one.
If there was only one major AI project underway it would probably be managed less well than the overall best-run project selected from a diverse set of competing companies.
Unlike the Manhattan Project, there are already sufficiently strong commercial incentives for private companies to focus on the problem, it’s not yet clear exactly how the first AGI system will work, and capital markets today are more mature and capable of funding projects at much larger scales. My gut feeling is that if AI were fully consolidated tomorrow, this would be more likely to slow things down than speed them up.