I am an economist working in the Financial Risk Department of Banco de España (the Spanish central bank). I was born in 1977 and I have recently finished my PhD thesis (see my ORCID page: https://orcid.org/0000-0002-1623-0957).
Arturo Macias
I repeat here my previous post supporting Bostrom and his apology:
He is simply one of the best philosophers of our time, and the main founder of longtermism. The comment on the original mailing list was a modest misstep by a person without special responsibilities at the time, and his 2023 apology was entirely appropriate.
The racial IQ gap is an observational fact, and he is not an expert on its determinants, so he should not take a position on a scientific issue that lies beyond his academic authority.
99% of meat comes from factory farms? A good share of cows (and sheep, which in Europe, Australia, and New Zealand still have some relevance) are pasture-fed, as anybody driving a car can easily check for herself.
https://extension.psu.edu/grass-fed-beef-production
“Rather than debate advantages and disadvantages of the grain versus grass-fed systems, the take-home here is that all beef cattle, whether farmers choose to raise them as grass-fed or grain-fed animals, spend at least two-thirds of their lifetime in a pasture setting. Therefore, all beef may be considered “grass-fed” for the majority of its life. Thus, beef production in the United States has been, and continues to be, a forage-based industry. The differentiation in what makes cattle grass-fed then, generally occurs towards the end of life and will be discussed in more detail.”
I was not aware of the enormous weight of aquaculture in final fish production. I thought it was around 10%, but it is close to one half.
https://ourworldindata.org/rise-of-aquaculture
Omnizoid is right, and I have retracted my comment.
When I began writing on the EA Forum I was often told that I was unclear, but this is because there is a “rationalist style” that, in my view, is not conducive to understanding, but to signaling community belonging.
Well, I am surprised by the amount of downvoting and the lack of comments. I would have been glad to engage with opposing arguments.
Hello to all,
Have you contacted the Integrated Information Theory group about this project? From my (dualistic naturalist) viewpoint, their work is the most advanced in the area of consciousness detection.
https://www.amazon.com/Sizing-Up-Consciousness-Objective-Experience/dp/0198728441
Of course, consciousness is absolutely noumenal, and the best part of their work focuses on the case where self-reported conscious experience is possible [humans], but they have tried to extrapolate to mathematical models applicable to any material system.
In my view you underestimate the degree of intentionality and coordination of the offensive against EA.
Latin American and African groups are extremely important for EA, so congratulations on the initiative!
In a previous post (https://forum.effectivealtruism.org/posts/4viLtxnwzMawqdPum/time-consistency-for-the-ea-community-projects-that-bridge) I argued that the development of non-proprietary technologies to improve productivity (and especially agricultural productivity) in Africa should be considered a main EA priority.
There have been some movements, like African Makers (http://africanmakersmedia.com/) or Open Source Ecology (https://www.opensourceecology.org/), that have tried to build an alternative open-source technological ecosystem. Do you know about these kinds of networks, and what kind of assistance can be offered to them?
While I do not have a very good opinion of AI risk research, this is the last thing it needs.
There is radical uncertainty about the technological paths opened by AI, whether those paths end in AGI, what kind of preferences an AGI would have at the beginning, and how they would evolve. Any mathematical modelling at this stage would be pure “pretense of knowledge”: an exercise even more sterile than the numbers war over whether there is a 1%, a 10%, or a 99% probability of AI doom.
It is time to explore the technology, and to make researchers sensitive to the risks. In fact, I think that AI safety does not yet exist as an independent field of knowledge, and the mathematization of (almost) nothing is even worse than nothing.
I have written two recent posts describing my position. In the first, I argued that nuclear war plus our primitive social systems imply that we live in an age of acute existential risk, and that substituting AI-based government for our flawed governance is our chance of survival.
In the second, I argue that, given the kind of specialized AI we are training so far, existential risk from AI is still negligible and regulation would be premature.
You can comment on the posts themselves, or you can comment on both posts here.
Do you have any suggestions for non-lethal aid to Ukraine? For example, organizations providing electrical generators, heaters, or anything else that can help the non-military resistance of Ukraine’s population?
To some extent this is more or less a description of the brightest side of the American Dream. It would be interesting to know what projects you are interested in.
It increases large migrations, political instability, drought… and those create geopolitical instability and raise the probability of conventional war, revolution, etc., and those events can easily trigger a nuclear war as long as a nuclear power is involved. Do the math: at least a third of the world’s population lives in nuclear-armed nations (see the rough check below).
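As a back-of-the-envelope check (the population figures below are my own approximate 2022 numbers, for illustration only, not from any source cited here):

```python
# Rough share of humanity living in nuclear-armed states.
# Populations are approximate 2022 values in millions (ballpark
# estimates for illustration, not an authoritative dataset).
nuclear_armed = {
    "China": 1412, "India": 1417, "United States": 333,
    "Pakistan": 236, "Russia": 144, "France": 68,
    "United Kingdom": 67, "North Korea": 26, "Israel": 10,
}
world_population = 7950  # approximate world total, in millions

share = sum(nuclear_armed.values()) / world_population
print(f"Share living in nuclear-armed states: {share:.0%}")  # roughly 47%
```

Even with generous error bars on these numbers, the fraction stays comfortably above one third.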
Nuclear war turns historical risk into something that can have consequences on a geological timescale. I don’t believe there is really any other existential risk (on the timescale of decades) than nuclear war. There is only one “precipice”, but many ways to fall into it.
Dear all,
My name is Arturo Macias. I am a 45-year-old economist working at Banco de España, the Spanish central bank. I have recently finished my Ph.D. (see my ORCID account for my published papers: https://orcid.org/0000-0002-1623-0957), and consequently I have recovered a substantial amount of free time.
While I have a great deal of sympathy for the whole Effective Altruism movement, my main interest is institutional design and economic stabilization. In my view, among the main existential-risk bottlenecks of this dangerous century, a critical one is institutional stagnation. E. O. Wilson famously said: “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology”. Regarding the Paleolithic emotions, I cannot advance any solution (that is for geneticists), and regarding the god-like technology, after August 6th, 1945, nobody can.
Regarding the medieval institutions, I think I can make some modest contributions, and that is why I am here.
Kind Regards,
Arturo
Thank you for your post, and my best wishes to Israel in these challenging times. Israel disproportionately attracts both the hatred and the admiration of other countries, and perhaps those who hate are more vocal. So while I cannot add much, I want to wish you all the best.
Dear Mr. Wagner,
Do you have any canonical reference for AI alignment research? I have read Eliezer Yudkowsky’s FAQ and I have been surprised by how few technical details are discussed. His arguments are very much “we are building alien squids and they will eat us all”. But they are not squids, and we have not trained them to prey on mammals, but to navigate across symbols. The AIs we are training are not as alien as a giant squid, but far more alien: they are not even trained for self-preservation.
MR suggests that there is no peer-reviewed literature on AI risk:
“The only peer-reviewed paper making the case for AI risk that I know of is: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064. Though note that my paper (the second you linked) is currently under review at a top ML conference.”
But perhaps I can read something comprehensive (a PDF, if possible), rather than depend on navigating posts, FAQs, and similar material. Currently my understanding of AI risk is based on technical knowledge of reinforcement learning for games and multi-agent systems. I have no knowledge of, nor intuition about, other kinds of systems, and I want to engage with the “state of the art” (in a compact format) before I make a post focused on the AI alignment side.
I disagree! The European institutions harmonize European regulation (often this means “take the least constraining requirements”). Europe is perhaps too regulatory, but the EU is not a force for more regulation, only for uniform regulation.
Never forget that EU directives are approved by a supermajority of the European Council (i.e., the meeting of European governments), plus the EU Parliament (which almost always accepts by majority anything that the Council approves by supermajority).
“Maxwell’s theory of electromagnetism”
Is this a test of attention? Or is there something I am missing?
I completely agree with this position, but my take is different: nuclear war risk is high all the time, and all geopolitical and climate risks can increase it. It is perhaps not existential for the species, but it certainly is for civilization. Given this, for me it is the top risk, and to some extent all efforts for progress, political stabilization, and climate risk mitigation are modestly important in themselves, and massively important through their effect on nuclear war risk.
Now, the problem with AI risk is that our understanding of why and how AI works is limited. If my understanding is correct, we have constructed AlphaZero mainly by growing it, not by designing it. We really don’t understand “how it works”. The “black box risk” is huge, and until we have a better theoretical understanding of AI, all efforts will be mainly useless. The “information bottleneck principle” attempted this, but interest in it faded. I think no other generalizing principles have been proposed, but I am a user, not a developer, so I could be wrong.
If I understand properly, EA is to some extent a platform where many different perspectives can coexist. I find shrimp welfare or deep longtermism totally pointless, but cause choice is a main principle of the movement.
Your effort and money can be directed to the issues you really care about. Democracy is good; choice is even better!