This is why organizations need diversity: this is exactly what happens when an organization has a Western, developed-world mindset. The palette of ideas and viewpoints is clearly lacking.
AGI delivery by 2030 will fail; we barely have the resources to properly run the current LLM models, and AGI will surely be far more complex, if it is even possible. Congratulations, you have now jumped on a bandwagon that will take us nowhere, while forgetting about actual causes that are relevant right now.
It’s a shame that the same people who built you up, shaped you as an organization, and gave you credibility are now put out on the pier and left behind.
Hey NobodyInteresting,
I see you’re getting downvoted and disagreed with without any direct interaction, so I’ll bite.
diversity… Western, developed-world mindset… clearly lacking
I think this is a bit combative in how it’s delivered. I suspect a number of people actually agree with the ‘echo chamber’ problem; several of the other comments say something similar in how they disagree with 80k’s conclusions.
However, you might need to elaborate on the “Western” aspect. What is the shortcoming you’re identifying here? For instance, I’d expect a Western lens to be the right one for this problem, because most of the frontier labs are Western.
AGI delivery by 2030 will fail… AGI will surely be far more complex, if it is even possible
I think this is a controversial call; some agree, some disagree. But I don’t think anyone can say it with confidence either way: the frontier lab leaders, for instance, all seem to be saying the opposite, in a way that doesn’t seem to be explained by “hype” or “pandering to investors”. This would be a good one to elaborate your views on, or to provide a link to the strongest evidence you buy into.
we barely have the resources to properly run the current LLM models
I think this is just not true? Maybe we don’t have the resources to run them personally, but plenty of companies seem to be running their latest models for (limited) public access just fine. Are you addressing inequality-of-access considerations here, or something else?
Congratulations, you have now jumped on a bandwagon that will take us nowhere, while forgetting about actual causes that are relevant right now
I think I’ve mostly responded to this, but I’d like to connect it with the “left behind” point at the end:
I think it’s good, and highly in keeping with EA principles, to (a) respond to information as it comes in and (b) think and work on the margin. This is probably too big to tackle in this comment (maybe I’ll write something up later), but:
changing advice on the margin (for the next person) isn’t necessarily the same as ‘leaving people behind’, and
I would be interested in whether, and why, people who agree with the advice / 80k’s judgment feel they should stay where they are instead of also pivoting.
EA as a whole acts globally, but most of the big EA players are Western-based, and I believe there is a big gap between what reality looks like for the average human on the planet and for the average EA participant. Yes, in terms of AGI it’s mostly a Western game, but 80,000 Hours wasn’t an AI-focused organization; it was an EA-focused organization. This feels somewhat like putting all the eggs in one basket, except it’s a single Fabergé egg that promises a lot of potential but could be rotten on the inside.
In my opinion, AGI is somewhere around third place among the “biggest global issues” right now. First for me is climate change, and second is the security of food production systems. Again, this aspect is inherently non-Western, because Western Europe and most of the US don’t face the repercussions of climate change, or if they do, they are minor compared to India or parts of Africa.
I am poor on references; I live and die by my views, sorry, I know that’s not the most reliable approach. But the whole hype about 2030 AGI feels just like that: hype, with everything always just around the corner. In the early 2000s I remember these huge corporations promising the next breakthrough every year, everything within reach of our fingertips but slipping away at the last moment. Cryonics, Theranos and the like: so much potential, so many “results”, but in the end it was all a sham. My fear is that we now have people cooking the books, cooking the reports, and overpromising the dream while the reality is bleak. My main fear is that we will invest too much and get little to nothing in return, while the other causes pile up and cause real-life issues.
I am not against AI or AGI research; I just fear that diving deep into that one cause is a huge gamble. I am cheering you all on to succeed, but I am afraid of the scenario where it all goes down like a house of cards and we have lost so much funding and so many resources working on this synthetic brain, with nothing to show for it.
I usually just sit down and type my thoughts directly, without working through my thinking, so my first point is somewhat moot in parts, but I feel like I have now addressed some of the things I wanted to address. Thanks for your feedback and comment; I enjoyed the difference in views.
Thanks for writing this out. I think it’s important to keep in mind that there’s a significant difference in lived experience between the median human being on this planet and the median EA.
As for hype: AI might or might not be hype. The question is whether we can accept the risk that it is not. Even if development plateaued in the near future, AI is already powerful enough to have significant effects on, e.g., world economies. I’d submit that we especially need non-Western perspectives in thinking about how AI will affect the lives of people in developing countries (cf. the discussion here). In my view, there’s a tendency in EA/EA-adjacent circles to assume technological progress will lift all boats, rather than considering that throughout history people have used technological advances to support their positions of power and privilege.
To be fair to 80K here, it is seeking to figure out where the people it advises can have the most career impact on the margin. That’s not necessarily the same question as which areas are most important in the abstract. For example, someone could believe that climate change is the most important problem facing humanity right now, but nevertheless believe that progress on climate change is bottlenecked by something other than new talent (e.g., money), and/or that there is enough recruitment into climate work to fill the field’s capacity with excellent candidates without any effort on 80K’s part. So I’d encourage you to consider refining your critique to also address how likely it is that devoting the marginal resources in question to your preferred cause area(s) would make a difference.
AGI will affect everyone on the planet, whether they believe the “hype” or not (most likely by killing them all once recursive self-improvement kicks in, which at this rate could happen before 2030).
Thecompendium.ai is a good reference. Please read it, and feel free to ask any questions you have about it. (Also, cryonics isn’t a sham; it’s still alive and well, it just doesn’t have many adopters yet, but that’s another topic.)
We shouldn’t be working on making the synthetic brain; we should be working on stopping further development!