Very nice post! I’m not sure if you have looked into this, but the market aside, given that people in EA believe claims about AI risk and short timelines, are EA charities spending money in proportion to how seriously the EA community seems to take short AI timelines and AI x-risk? For example, you cited some reports from Open Philanthropy, like Bio Anchors, from which you extracted some of the probabilities used in your calculation. Do you think Open Phil’s spending is in line with the expected timelines suggested by Bio Anchors?
The justifications made in that post are weak in proportion to the claims made, IMO, but I’m just a simple human with very limited knowledge and reasoning capability, so I am most likely wrong in more ways than I could ever fully comprehend. You seem like a more capable human who is able to think about these types of claims much more clearly and understand the arguments much better. Given that argumentation is the principal determinant of how people in industry make products, and as a by-product the primary determinant of technological development for something like AI, I have full confidence that the kinds of inferences you allude to will have very strong predictive value as to how the future unfolds when it comes to AI deployment. I hope you and your fellow believers are able to do a lot of useful things about existential risk from AI based on your accurate and infallible inferences and save humanity. If it doesn’t work out, at least you will have tried your best! Good luck!
To be clear, I was making an analogy about what the claims look like, not saying that they are written explicitly that way. I see implicit claims about the omnipotence and omniscience of a superintelligent AI starting from the very first [link](https://intelligence.org/2015/07/24/four-background-claims/) in the curriculum. Claims 2–4 in that link are just beliefs, not testable hypotheses that can be proven or disproven through scientific inquiry.
Sorry, but I’m not going to do your homework for you. If you want to find arguments for or against AI safety, go look for them yourself. If you want to find out what leading AI researchers actually think, you can find that as well. I have no special insight over the many people who have expertise in the field of AI, so I am not the best source, and my conclusions could be wrong. I’m still learning more all the time as I increase my expertise in AI. If you have done your homework and have come to the conclusion that AI safety as a field is warranted, then well and good. If you are looking for someone who will argue with you in order to convince you one way or another, then I hope someone is willing to do that for you. Either way, good luck!
There have been loads of arguments offered on the forum and through other sources: books, articles on other websites, podcasts, interviews, papers, etc. So I don’t think what’s lacking is arguments or evidence. I think the issue is the mentality some people in EA have when it comes to AI. Are people who wait for others to bring them arguments to convince them of something really interested in getting different perspectives? Why not just go look for differing perspectives yourself? This is a known human characteristic: if someone really wants to believe in something, they can believe it even to their own detriment and will not seek out information that may contradict their beliefs (I was fascinated by the tales of COVID patients denying that COVID exists even while dying from it in an ICU).

I witnessed this lack of curiosity in my own cohort that completed AGISF. We had more questions than answers at the end of the course and never really settled anything during our meetings other than minor definitions here and there. Despite that, some of the folks in my cohort went on to work, or try to work, on AI safety and solicit funding without either learning more about AI itself (some of them didn’t have much of a technical background) or trying to clarify their confusion about the arguments. I also know another fellow from the same run of AGISF who got funding as an AI safety researcher while knowing very little about how AI actually works. They are all very nice, amicable people, but despite all the conversations I’ve had with them, they don’t seem open to the idea of changing their beliefs even when there are a lot of holes in their positions and you directly point those holes out to them.

In what other contexts are people not open to the idea of changing their beliefs, other than religious or other superstitious ones? Well, the other case I can think of is when holding a certain belief is tied to having an income, reputation, or something else that is valuable to a person. This is why a conflict of interest at the source of funding pushing a certain belief is so pernicious: it really can affect beliefs downstream.
Yup very sure. AGI Safety Fundamentals by Cambridge.
I took that course and gave EA the benefit of the doubt. I was exposed to arguments about AI safety before I knew much about AI, and it was very confusing stuff; a lot of it didn’t add up, but I still gave the EA take the benefit of the doubt since I didn’t know much about AI and thought there was something I just didn’t understand. I then spent a lot of time actually learning about AI and trying to understand what experts in the field think about what AI can actually do and what lines of research they are pursuing. Suffice it to say that the material on AGI safety didn’t hold up well after this process.

The AI x-risk concerns seem very quasi-religious. The story is that man will create an omnipresent, omniscient, and omnipotent being. Such beings are known as God in religious contexts. More moderate claims hold that a being, or a multiplicity of beings, possessing at least one of these characteristics will be created, which is more akin to the gods of polytheistic religions. This being will then rain down fire and brimstone on humanity for the original sin of being imperfect, which manifests in the specification of an imperfect goal. It’s very similar to religious creation stories, but with the role of creator reversed; the outcome is the same: Armageddon. Given that the current prophecy seems to indicate that the apocalypse will come by 2030, there seems to be an opportunity for a research study to be done on EA similar to the one done on the Seekers.

Given that this looks very much like a religious belief, I doubt there is any type of argumentation that will convince the devout adherents of the ideology that their beliefs are not credible. There will also be a selection bias towards people who are prone to this kind of ideological belief, similar to how some people are just prone to conspiracy theories like QAnon, although AI x-risk is a lot more sophisticated. At least the people who believe in mainstream religions are upfront that their beliefs are based on faith. The AI x-risk devotees also base their beliefs on faith, but it’s couched in incomprehensible rationality newspeak, philosophy, absurd extrapolations, and theoretical mathematical abstractions that cannot be realized in practical physical systems, to give the illusion that it’s more than that.
I think they would have to believe there is a risk, but they are actually just trying to figure out how to make headway on basic issues. The point of my comment was not to argue about AI risk, since I think that is a waste of time: those who believe in it seem to hold it more like an ideological/religious belief, and I don’t think there is any amount of argumentation or evidence that can convince them (there is also a lot of material online where top researchers are interviewed and talk about some of these issues, for anyone actually interested in the state of AI outside the EA bubble). My intention was just to point out that there is a conflict of interest in this particular domain that is having a lot of influence in the community, and I doubt much will be done about it.
There is a difference. ML engineers actually have to back up their claims by making products that work and earn revenue, or by successfully convincing a VC to keep funding their ventures. The source of funding and the ones appealing for the funding have different interests. In this regard, ML engineers have more of an incentive to oversell the capabilities of their products than to downplay them. It’s still possible for someone to burn their money funding something that won’t pan out, and this is the risk investors have to take (I don’t know of any top VCs as bullish on AI capabilities on as aggressive timelines as EA folks). In the case of AI safety, some of the folks who are in charge of the funding are also the loudest advocates for the cause, as well as some of the leading researchers. The source of funding and the ones using the funding are commingled in a way that leads to a conflict of interest that seems far more problematic than what I’ve noticed in other cause areas. But if such serious conflicts do exist elsewhere, then those too are a problem, not an excuse to ignore conflicts of interest here.
I think conflict of interest is what has led to existential risk from AI rising to be the most important issue in EA, even though it’s based on dubious reasoning and extrapolations that many people at the forefront of AI development don’t think make sense from a capabilities perspective. It’s been sufficient for senior people in EA to take it seriously, and given that these same folks control resource allocation, it ends up driving a lot of what the community thinks. This bias clearly reveals itself when talking to some EAs who are terrified of AI but don’t know anything about how it works, nor have any idea what actual AI researchers think or the obstacles they are trying to overcome. It seems like 95% of the people in EA who are terrified of existential risk from AI just defer to others who speak about things they don’t really comprehend, but because those people control the money and status in the community, they assume they can be trusted. How can the folks who are funding AI safety research be considered objective when they are the same folks who produce the content considered authoritative on AI safety, and who have familial or intimate relationships with top AI researchers? I don’t question the sincerity of these folks in their beliefs, but given the nature and structure of the situation, I cannot trust that EA can come to the correct conclusions about this specific topic. I also think this is a mess that cannot be untangled and will have to run its course until EA doesn’t have money to burn.
Great article, but I think the baseline for comparison should be how long a country has existed as an independent, cohesive entity. Malawi has only been an independent state for 58 years, so maybe the comparison should be against where now-developed countries were at 58 years of independence in modern times. I think if this were done, Malawi might look a lot better. I’m skeptical that the process of nation building and developing national cohesion can be shortcut, and I think without such national cohesion it’s difficult for broad-based economic growth to happen. A lot of comparisons are made between countries in sub-Saharan Africa and countries that have existed in some form or another for centuries, or where the populations have remained homogeneous enough that there is no significant internal friction limiting economic activity. Neither of these things is true for most sub-Saharan countries, where there is a lot of internal division. Many people, if not most, in these countries have more allegiance to their tribes than to the state, and this really affects how politics and governments are run. It also affects how resources are distributed and levels of trust in society. I’d guess that in my own country it will take maybe two to three generations of intermarriage and other societal processes for tribal lines to get sufficiently blurred that the nation state becomes the preeminent identifying group for most people. I don’t find this depressing, as it seems to be a process states have to go through. It would be absurd (in my view) to expect a young child to grow into a mature adult in, say, half the time it takes other children; similarly, I think it’s kind of absurd to expect young countries that didn’t exist a century ago to become fledgling utopias within, say, 50 years.
There should be a clear separation between the funding of cause areas and the people working on specific cause areas, to avoid the conflicts of interest that inevitably affect people’s judgement. For contentious cause areas, it may be better to commission a broad group of expert stakeholders with a diversity of knowledge, backgrounds, and opinions to assess the validity of a cause area against EA criteria. There are currently some cause areas that seem very ideologically driven by a few highly regarded folks in EA. These folks seem to both generate the main source material that supports the cause area and steer the funding for it. This asymmetry in power and funding undoubtedly influences which cause areas end up becoming and staying prominent in EA.
The public debt-to-GDP ratio in Japan is over 260% and they still haven’t defaulted (it somewhat boggles my mind that they can sustain such high debt levels, even though there seem to be reasonable explanations for it). There are ostensibly some differences between the Japanese and American contexts, but it nonetheless seems possible for developed countries to sustain high levels of public debt for a considerable amount of time. How long is still an open question. I’d expect to see the Japanese situation unravel before the American one, and maybe that would give an indication of how sustainable extremely high levels of debt are, if such an unraveling ever happens.
> As the government debt reaches maturity, it needs to roll over and be repriced at current interest rates of 4% (but let’s take an average of 3%)—think of this as the fixed-term interest rate on your mortgage running out, so you need to renegotiate a new rate with the bank. If we reprice the current $31.3T of debt at 3% interest rates, the interest expense would increase by ~$500 billion per year to almost $1 trillion per year—that will overtake military spending as the biggest single line item of spending.
Not all the debt matures at the same time, does it? If not, then maybe only the portion that matures gets repriced?
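To make the point concrete, here’s a minimal sketch (my own illustration, not from the post) comparing the “reprice everything at once” figure with a staggered rollover in which only the maturing share of the stock gets the new rate each year. The 20% annual rollover share and the 1.5% legacy rate are hypothetical placeholders, not actual Treasury figures.

```python
# Sketch: instant repricing vs. staggered rollover of $31.3T of debt.
# Assumed numbers (rollover share, legacy rate) are illustrative only.

TOTAL_DEBT = 31.3e12   # $31.3T, from the quoted post
OLD_RATE = 0.015       # hypothetical average rate on not-yet-matured debt
NEW_RATE = 0.03        # the post's assumed average rollover rate

# Instant repricing, as in the quoted calculation:
print(f"All debt at 3%: ${TOTAL_DEBT * NEW_RATE / 1e9:.0f}B/yr")  # ~$939B

# Staggered rollover: assume (hypothetically) 20% of the stock matures
# and rolls over to the new rate each year; the rest keeps the old rate.
rolled = 0.0
for year in range(1, 6):
    rolled = min(TOTAL_DEBT, rolled + 0.20 * TOTAL_DEBT)
    interest = rolled * NEW_RATE + (TOTAL_DEBT - rolled) * OLD_RATE
    print(f"Year {year}: ${interest / 1e9:.0f}B/yr")
```

Under a staggered schedule like this, the interest bill ramps toward the ~$1T figure over several years rather than jumping there immediately, which is the point of the question above.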
I’m waiting to read your take, especially since the conflict of interest issue has come up with FTX and you seem to have some conflicts of interest yourself, particularly when it comes to AI safety funding. I’m curious how you have managed to stay unbiased.
Personally, I’d rather be kicked when I’m down; better to deal with all the pain at once. Maybe there could be a forum feature that lets a person tune the percentage of critical posts they see based on sentiment analysis, so those who can’t handle all the vitriol at once can take it in little bits at their own convenience.
You’re right. I guess people noticed but there was no meaningful action taken. Oh well I guess until next time.
On the one hand, publicly calling people out could raise red flags about potential bad actors in the EA community early and in a transparent way. On the other hand, it could lead to witch hunts, false positives, etc. You can also imagine that whatever bad actors are in the community would start retaliating and listing good actors here who are their enemies, so it could cause a lot of confusion and infighting. (Think about the way Donald Trump uses public discourse.)
I don’t think these concerns hold up. EAs are highly engaged and can distinguish between legitimate and bullshit claims.
I think private discussions about this with others you trust are probably a good idea, to help guide personal decisions about who else to trust, work for, donate to, etc. This issue might also make more decentralization actionable.
This would work if not for the power imbalances that arise between bad actors who are senior members of the community and normal community members.
There may be other systemic ways to manage the risk without making public call-outs of suspected bad actors a regular thing. I’m also still open to the idea that publicly calling out bad actors might be a good idea, but it’s a really delicate matter, and I’d like to see a convincing argument/plan for how to make the discussion productive before I would support it.
EA takes the issue of morality and ethics very seriously. Is it that much to ask that a community that takes these matters so seriously hold its leaders to high moral standards? Unless this morality stuff is just for fun and EAs don’t actually believe in the moral and ethical systems they claim to believe in. Reminds me of this paper.
Avoiding this issue will only lead to more pain down the line. I find it very strange that SBF had so many inconsistencies between his claimed moral positions and his behavior and that nobody noticed. It suggests that maybe his character didn’t seem that strange compared to other senior EA members.
I’m impressed by your belief that the AGI community has much more insight into the future of the rest of humanity than the rest of humanity does! You and the AGI community are either very insightful or very delusional, and I’m excited to find out which as time progresses, if I’m lucky enough to live long enough to see how these short-timeline forecasts play out. I wonder if it follows that you also believe the topmost percentile of thinkers or prognosticators among all ~8 billion humans currently alive are in the AGI community? I also admire that, despite not agreeing with most of humanity, you are still willing to work on preventing AI x-risk and save us! That’s some true altruism right there!