EA as a whole acts globally, but the majority of big EA players are Western-based, and I believe there is a big gap between what reality looks like for the average human on the planet versus the average EA participant. Yes, in terms of AGI it's mostly a Western game, but 80,000 Hours wasn't an AI-focused organization; it was an EA-focused organization. I think this is somewhat like putting all the eggs in one basket: it's a Fabergé egg that promises a lot of potential, but it could be rotten on the inside. In my opinion, AGI is somewhere around third place for "biggest global issues" right now. First for me is climate change, and second is the security of food production systems. Again, this priority is inherently non-Western, because Western Europe and most of the US don't face the repercussions of climate change, or if they do, they are minor compared to those in India or parts of Africa.
I am references-poor; I live and die by my views. Sorry, I know that makes me not the most reliable person, but the whole hype about 2030 AGI feels just like hype: everything is always just around the corner. In the early 2000s, I remember how every year these huge corporations would promise the next breakthrough, everything within reach of our fingertips but slipping away at the last moment. Cryonics, Theranos, and the like: so much potential, so many "results", but in the end it was all a sham. My fear is that we now have people cooking the books, cooking the reports, and overpromising the dreams, while the reality is bleak. My main fear is that we will invest too much and get little to nothing in return, while the other causes pile up and cause real-life problems.
I am not against AI or AGI research. I just fear that diving deep into that one cause is a huge gamble. I am cheering you all on to succeed, but I am afraid of the scenario where it all comes down like a house of cards, and we will have lost so much funding and so many resources working on this synthetic brain with nothing to show for it.
I usually just sit down and type my thoughts directly; I don't work through my thinking beforehand, so my first point is somewhat moot in parts, but I feel like I addressed some things I wanted to address. Thanks for your feedback and comment; I enjoyed the difference in views.
Thanks for writing this out. I think it’s important to keep in mind that there’s a significant difference in lived experience between the median human being on this planet and the median EA.
As far as hype: AI might or might not be hype. The question is whether we can accept the risk of it being not-hype. Even if development plateaued in the near future, it is already powerful enough to have significant effects on (e.g.) world economies. I’d submit that we especially need non-Western perspectives in thinking about how AI will affect the lives of people in developing countries (cf. the discussion here). In my view, there’s a tendency in EA/EA-adjacent circles to assume technological progress will lift all boats, rather than considering that people have used technological advances throughout history to support their positions of power and privilege.
To be fair to 80K here, it is seeking to figure out where the people it advises can have the most career impact on the margin. That's not necessarily the same question as which areas are most important in the abstract. For example, someone could believe that climate change is the most important problem facing humanity right now, but nevertheless believe that progress on climate change is bottlenecked by something other than new talent (e.g., money), and/or that there is enough recruitment of people to work on climate change to fill the field's capacity with excellent candidates without any work on 80K's part. So I'd encourage you to consider refining your critique to also address how likely devoting the additional resources in question to your preferred cause area(s) is to make a difference.
AGI will affect everyone on the planet, whether they believe the "hype" or not (kill them all, most likely, once recursive self-improvement kicks in, before 2030 at this rate).
Thecompendium.ai is a good reference. Please read it, and feel free to ask any questions you have about it. (Also, cryonics isn't a sham; it's still alive and well, it just doesn't have many adopters yet. But that's another topic.)
We shouldn’t be working on making the synthetic brain, we should be working on stopping further development!