I live for a high disagree-to-upvote ratio
To me wellbeing is the most exciting topic in EA GHD at the moment, because with some serious engagement from the kinds of players attending that workshop, it has the greatest potential to credibly upend the currently accepted wisdom in EA GHD. There are a lot of questions that you and others have been chipping away at for some time that many people assume are either solved or unlikely to yield field-altering results, and I think that impression is wrong!
To be frank: I don’t think it is possible to be confident about the impacts of AI on the labour market. Markets are very weird and respond to technological improvements in unpredictable ways; tautologically, if you could accurately predict them, you’d be able to place a leveraged trade and become a billionaire very quickly (and if you disagree with me, put your money where your mouth is and make that trade 😜). It can be easy when you’re feeling unsure of yourself to look up to those around you who seem more confident, but I would be careful about doing so. (Besides, if AGI does come before you graduate, you may well have bigger problems.)
On a personal, mental level, I would try to ignore anyone who tells you to put people on a probability distribution. Whether or not that’s technically true, almost every situation where I or my friends have done this (careers, dating, politics, IQ) has created such a damaging second-order effect on our mentality and approach to the world that it’s usually better to just ignore it (an infohazard, if you will). I can see some effects of this in your post and the way you’re talking about yourself.
80,000 Hours’ advice used to be (unsure if it still is) to study very broadly, to give yourself optionality. This is especially good advice if (you + 8 years) may regret one choice or another. Besides, if you take the dual degree (rather than both in sequence), you can always drop out of one if you become more sure of yourself down the track—indeed, this is part of why universities offer this kind of flexibility.
Average income of CS graduates relative to average US individual income at the midpoint between now and HL-AGI
I don’t think it’s going to change much. On the supply side, AI tools making it easier for people to write code might push wages down slightly, but writing code ≠ developing software. Demand might dip slightly at first as existing firms find productivity improvements and markets demand cuts, but the demand for more software is still nearly infinite. A rush of new, cheap entry-level programmers from the Global South in the 2000s–2010s didn’t really depress wages at all.
I’m not an economist though, so I’m probably not qualified to have a good opinion here. I’m speaking as a professional software engineer with a deep familiarity with these tools.
One thing I didn’t expand on in that thread is some uncertainty I have around ‘You think your sacrificed money is best spent on the non-profit you are working for’.
Right now my charity is definitely not that cost-effective, but I’m confident it will be one day. In my head, putting money toward this charity is the best way to spend it, just not the most cost-effective way today.
I don’t have nearly the arrogance to believe that my charity is going to be the most cost-effective giving opportunity of all time, so donating 100% of my sacrificed earnings to it probably goes against the spirit of the pledge. On the other hand, it does feel like something would be lost by not incentivising people to make this kind of sacrifice in their careers.
(But ultimately I don’t care much for the status of a pledge or whatever, because I know I’m doing the right thing here)
For these reasons I haven’t considered my sacrifice as a GWWC pledge so far, but I’m uncertain about it.
Given EA’s goals, I’d argue it’s okay to hold them to a high standard.
I would go further, and say that given CEA’s specific history and promises of change around sexual harassment[1], we should hold them to an even higher standard than that.
[1] CEA was and is a member organisation of EV UK, and the findings partially concerned CEA’s Community Health Team.
(I am glad that we have a lawyer to resolve any ‘I am not a lawyer but…’ comments on here)
I think that repeatedly re-opening discussions on any form of eugenics actively undermines the work many EAs are doing in the Global South and poses a severe risk to our reputation and credibility as a movement in the global health space. Given the history of discussing this topic within EA, I do not believe that anyone in this community has the precision and tact to discuss proposals around eugenics without causing these harms, if it is even possible to do so at all (I do not believe it is).
I also believe that discussing eugenics on the forum undermines attempts to make EA more welcoming to a large number of racial groups, because of the association with forms of oppression and genocide against those groups. I believe that all of these harms persist even if you don’t specifically talk about where you might believe the existing differences in intelligence lie, because of that history. I believe that there are many people who would make fantastic EAs but are put off this movement because of this association.
I believe that members of the EA movement and its leaders should loudly and sharply condemn all forms of race science, human biodiversity, and more broadly, eugenics, because of these harms.
I am also, frankly, tired of having to write this comment every 6 months.
Hi, this has been discussed plenty of times before, often very controversially:
How to better advocate for genetic enhancement to the EA community
The Effective Altruist Case for Using Genetic Enhancement to End Poverty
An instance of white supremacist and Nazi ideology creeping onto the EA Forum
Here are two write-ups from Reflective Altruism, a criticism blog, on the EA Forum’s engagement with this topic area.
I think that more than enough ink has been spilled on this topic on this forum and I don’t see this post adding a lot to it. I think a better version of this post would engage with the existing discussion while treading very carefully around the impacts that discussing eugenics has on the goal of the EA Forum to be a welcoming and inclusive space for everyone. I will leave my object-level thoughts on your post in a different comment.
That may be the intended effect
Well, like, I don’t care who wrote the words, but I do care about who took ownership of them. If an AI happened to write in my style/voice and I reviewed it and posted it, would you consider that my writing or the AI’s?
Nuance: I’d be happy for an AI to write a draft, but (at this time) I will never publish something without a thorough review and substantial work to put it in my own voice. I will never let a single AI-written word go unreviewed. (This is the same for ghostwritten posts made on my behalf; I don’t think AI changes much here.)
Our authorial voice, and the trust that people put in our words, are among the few things we really have left that make us human. When I catch it, I find reading AI-written (or ghostwritten) content gross and disturbing, because it signals to me that the author has no respect for my time, or for their own humanity. I know that’s an extreme position, but I find it hard to take any other one.
As I wrote before—if you’re considering applying for the guided self-help intervention, please reach out to us at Kaya Guides and DM me! Happy to share lots of context :)
I’m sure there’s some good money in it, but Anthropic signed this deal around 8 months ago, when they were making substantially less money. I’m just not sure it’s worth the fight when other frontier labs have comparably performant models and substantially fewer moral qualms. Why risk the walkouts and resignations?
If you’re interested in understanding more about the digital guided self-help program, please reach out to us at Kaya Guides! I can be found on DMs here :)
Have you considered trying to offer this on food delivery apps? India’s apps have a lot of infrastructure for making it easy to find vegetarian products; I’ve found it very useful :)
(Even in the roles where it has produced productivity improvements, such as programming, that doesn’t necessarily imply job loss, as companies could get more ambitious with their existing budgets)
Are there also just concerns about misinterpretation? There’s not really a good way of checking baselines on hallucinations or unconfident predictions from the AI, since 99% of humans don’t know what these sounds mean.
Furthermore, since these seem to be based on the behaviours humans observe co-occurring with the communication, they’d necessarily be lower-fidelity than the animal’s thought process (as you note). That seems a bit lame, and the website certainly isn’t trying to dispel its own mythmaking around ‘talk to animals’, which isn’t really what’s happening here in any meaningful sense.
Hmm, I think that’s not the right framing for this. UBI is just not settled as a universally good idea in academic or political circles (sorry, no definitive citation for this), let alone that there’s an urgent unemployment crisis (the statistic I think you’re citing is for job openings, not actual employment rates) or that such a crisis, if it did exist, has structural causes that could be expected to intensify (i.e. it might not be AI, nor should we necessarily expect AI to become orders of magnitude more advanced in the next 5 years; there was plausibly a very different shock to the global economic system beginning around Liberation Day, 2025).
I’d also be curious whether Abundance money could fund this. Urban sprawl is a big driver of habitat destruction!
One other thing that feels missing from these comments is that a more mature field has a bunch of other interesting discussion points. If all the philosophical questions in EA GHD were one day solved, we could still have invigorating debates about how to develop and manage interventions, about who the payer should be, etc. etc.
So I’m not sure this is all just a dearth of topics to discuss—perhaps the nuance is that this forum tends to like those more philosophical or intellectual discussions and those aren’t generally the kinds of debates most GHD practitioners I know are having?