I live for a high disagree-to-upvote ratio
huw
Hmm. Not a super well-thought-out take here, but it seems to me that Situational Awareness’ biggest crux is whether an arms race dynamic would develop between the U.S. and China, and he lays out a few specific ways in which that might happen.
I don’t see any evidence of such an arms race taking place. China doesn’t have any frontier labs, only labs that distill other models. They haven’t yet produced a capable chip and seem at least a few years to half a decade off (much slower than Aschenbrenner’s predictions). They haven’t waged a state-sponsored cyberattack to steal model weights or algorithmic secrets—but I suppose you could argue it’s cheaper and easier to just distill in the short term?
In fact, given the ease of distillation and the proliferation of open-source models, it might be more reasonable to argue that such an arms race may not even occur, because it will be cheap and easy to access intelligence.
One reason this is important is that AOC is very likely to run for president in 2028, and has so far been quite judicious about which policies she chooses to publicly support and endorse.
This is either an attempt to test the waters on AI regulation, to see if it will become part of her platform, or she is already convinced it will be. If she runs, she will then be in a position to leverage this policy to convince other Democratic presidential candidates to adopt similar measures (or a rhetorical anti-AI framing). The other most likely candidate for president is Gavin Newsom, in whose state most of the leading AI companies are headquartered.
What would you say to a potential attendee who has a legitimate interest in reprogenetics’ emancipatory capacity, but is concerned that the conference will be taken over by discussions of human biodiversity, especially given that two of the featured speakers, Jonathan Anomaly and Steve Hsu, have both pretty clearly endorsed HBD or, at least, given the ambiguities in their statements, never explicitly disavowed it?
Would you be interested in screening out certain problematic attendees, or in explicitly disavowing human biodiversity on the conference website, in order to create an environment welcoming of open discussion of reprogenetics?
(Can you point me to something about the moral weight of fish eggs? I have never heard of this before)
One other thing that feels missing from these comments is that a more mature field has a bunch of other interesting discussion points. If all the philosophical questions in EA GHD were one day solved, we could still have invigorating debates about how to develop and manage interventions, about who the payer should be, etc. etc.
So I’m not sure this is all just a dearth of topics to discuss—perhaps the nuance is that this forum tends to like those more philosophical or intellectual discussions and those aren’t generally the kinds of debates most GHD practitioners I know are having?
To me wellbeing is the most exciting topic in EA GHD at the moment, because with some serious engagement from the kinds of players attending that workshop, it has the greatest potential to credibly upend the currently accepted wisdom in EA GHD. There are a lot of questions that you and others have been chipping away at for some time that many people assume are either solved or unlikely to yield field-altering results, and I think that impression is wrong!
To be frank—I don’t think it is possible to be confident about the impacts of AI on the labour market. Markets are very weird and respond to technological improvements in unpredictable ways; tautologically, if you could accurately predict them, you’d be able to place a leveraged trade and become a billionaire very quickly (and if you disagree with me—put your money where your mouth is and make that trade 😜). It can be easy when you’re feeling unsure of yourself to look up to those around you who seem more confident, but I would be careful about doing so. (Besides, if AGI does come before you graduate, you may well have bigger problems.)
On a personal, mental level, I would try to ignore anyone who tells you to put people on a probability distribution. While it may or may not be technically true, almost any situation where I or friends have done this (careers, dating, politics, IQ) creates such a damaging second-order effect on your mentality and approach to the world that it’s usually better to just ignore it (an infohazard, if you will). I can see some effects of this in your post and the way you’re talking about yourself.
80,000 Hours’ advice used to be (unsure if it still is) to study very broadly, to give yourself optionality. This is especially good advice if (you + 8 years) may regret one choice or another. Besides, if you take the dual degree (rather than both in sequence), you can always drop out of one if you become more sure of yourself down the track—indeed, this is part of why universities offer this kind of flexibility.
Average income of CS graduates relative to average US individual income at the midpoint between now and HL-AGI
I don’t think it’s going to change much. On the supply side, AI tools making it easier for people to write code might lower incomes slightly, but writing code ≠ developing software. On the demand side, incomes might dip initially as existing firms find productivity improvements and markets demand cuts, but the demand for more software is still nearly infinite. A rush of new, cheap entry-level programmers from the Global South in the 2000s–2010s didn’t really depress wages at all.
I’m not an economist though so I’m probably not qualified to have a good opinion here. I’m speaking as a professional software engineer who has a deep familiarity with these tools.
One thing I didn’t expand on in that thread is some uncertainty I have around ‘You think your sacrificed money is best spent on the non-profit you are working for’.
Right now my charity is definitely not that cost-effective, but I’m confident it will be one day. In my head, saving money for this charity is the best way to spend that money, but not the most cost-effective today.
I don’t have nearly enough arrogance to believe that my charity is going to be the most cost-effective giving opportunity of all time, so donating 100% of my sacrificed earnings to this charity probably goes against the spirit of the pledge. On the other hand, it does feel like something would be lost by not incentivising people to make this kind of sacrifice in their careers.
(But ultimately I don’t care much for the status of a pledge or whatever, because I know I’m doing the right thing here)
For these reasons I haven’t considered my sacrifice as a GWWC pledge so far, but I’m uncertain about it.
Given EA’s goals, I’d argue it’s okay to hold them to a high standard.
I would go further, and say that given CEA’s specific history and promises of change around sexual harassment[1], we should hold them to an even higher standard than that.
1. CEA was and is a member organisation of EV UK, and the findings partially concerned CEA’s Community Health Team. ↩︎
(I am glad that we have a lawyer to resolve any ‘I am not a lawyer but…’ comments on here)
I think that repeatedly re-opening discussions on any form of eugenics actively undermines the work many EAs are doing in the Global South and severely risks our reputation and credibility as a movement in the global health space. Given the history of discussing this topic within EA, I do not believe that anyone in this community has the precision and tact to discuss proposals around eugenics without causing these harms, if it is even possible to do so at all (I do not believe it is).
I also believe that discussing eugenics on the forum undermines attempts to make EA more welcoming to a large number of racial groups, because of the association with forms of oppression and genocide against those groups. I believe that all of these harms persist even if you don’t specifically talk about where you might believe the existing differences in intelligence lie, because of that history. I believe that there are many people who would make fantastic EAs but are turned off this movement because of this association.
I believe that members of the EA movement and its leaders should loudly and sharply condemn all forms of race science, human biodiversity, and more broadly, eugenics, because of these harms.
I am also, frankly, tired of having to write this comment every 6 months.
Hi, this has been discussed plenty of times before, often very controversially:
- How to better advocate for genetic enhancement to the EA community
- The Effective Altruist Case for Using Genetic Enhancement to End Poverty
- An instance of white supremacist and Nazi ideology creeping onto the EA Forum
Here are two write-ups from Reflective Altruism, a criticism blog, on the EA Forum’s engagement with this topic area.
I think that more than enough ink has been spilled on this topic on this forum and I don’t see this post adding a lot to it. I think a better version of this post would engage with the existing discussion while treading very carefully around the impacts that discussing eugenics has on the goal of the EA Forum to be a welcoming and inclusive space for everyone. I will leave my object-level thoughts on your post in a different comment.
That may be the intended effect
Well, like, I don’t care who wrote the words, but I do care about who took ownership of them. If an AI happened to write in my style/voice and I reviewed it and posted it, would you consider that my writing or the AI’s?
Nuance: I’d be happy for an AI to write a draft, but (at this time) I will never publish something without a thorough review and substantial work to put it in my own voice. I will never let a single AI-written word go unreviewed. (This is the same for ghostwritten posts made on my behalf; I don’t think AI changes much here.)
Our authorial voice, and the trust people put in our words, are among the few things we really have left that make us human. When I catch it, I find reading AI-written (or ghostwritten) content gross and disturbing, because it signals to me that the author has no respect for my time, or for their own humanity. I know that’s an extreme position, but I find it hard to take any other one.
As I wrote before—if you’re considering applying for the guided self-help intervention, please reach out to us at Kaya Guides and DM me! Happy to share lots of context :)
I’m sure there’s some good money in it, but Anthropic signed this deal around 8 months ago, when they were making substantially less money. I’m just not sure it’s worth the fight when other frontier labs have comparably performant models and substantially fewer moral qualms—why risk the walkouts and resignations?
If you’re interested in understanding more about the digital guided self-help program, please reach out to us at Kaya Guides! I can be found on DMs here :)
This is really nice, I really like it. Millenarianism feels all too easy to reach for in AI risk—as you note, there is a subtle self-satisfaction in predicting the end of the world that we have to be careful not to use as a crutch. In the world where we succeed, it will have been important to have done so pro-socially for the world after to have any chance of being worth living in.