You can give me anonymous feedback here: https://forms.gle/q65w9Nauaw5e7t2Z9
Charles He
Beautiful
I think this comment is confusing and unseemly, as it might be supporting the negative content in the parent comment. This is a thread for newcomers, making this concern greater.
I want to downvote this comment to invisibility or report this to a moderator, but the comment is made by a moderator, and he has two cats, presumably trained to attack his enemies.
Yes, exactly.
Specific use cases I’m thinking about:
Comments I want to reply to, but can’t immediately type out. It would be nice to batch them up to reply to later, and saving them would help.
There are also good informative comments, like someone recommending some authors here, that I want to save and use for personal value:
I can save these in another system I use, but it’s heavier weight, and many people wouldn’t have another system to save these in.
One of the takeaways that Allen leaves his readers with is that “this policy signals that the Biden administration believes the hype about the transformative potential of AI and its national security implications is real.” That sentiment probably feels familiar to many readers of this forum.
To be clear, it’s good that this sentiment is possible, it is good that you mention it and consider it, and it is good that Allen mentions it and may believe it.
If Allen or you are trying to suggest that this action is even partially motivated by concern about “transformative AI” in the Holden sense (much less full-on “FOOM” sense), this seems very unlikely and probably misleading.
Approximately everyone believes "AI is the future" in some sense. For example, we can easily think of dozens of private and public projects that sophisticated management at top companies pushed for, that are "AI" or "ML" (and that often turn out to be boondoggles). E.g., Zillow buying and flipping houses with algorithms.
These were claimed to be “transformative”, but this is only in a limited business sense.
This is probably closer to the meaning of “transformative” being used.
Streets in small parts of San Francisco, near downtown, are very dirty and occupied by unsheltered people. The environment is very surprising to some people who have not been to North America before.
In these neighborhoods, there are nice hotels/hostels that are cheaply priced and near BART or public transportation.
In the past, some EAs, not from North America, have not been aware of the above. Because SF is very expensive, and they wanted to be frugal, some have booked cheaper accommodations and then felt very uncomfortable, especially single people or women at night. This probably affected their experience of SF and the events they were going to.
Suggestions:
Check out reviews for the accommodations online. If people mention "The hotel is nice but is in [neighborhood X]", it's worth Googling the mentioned neighborhood to see what is going on.
Ask friends who live in SF.
Note:
Many Americans/locals don't like to draw attention to the situation, because it's chronic and ongoing, it's intractable and complicated, the discussion involves cultural viewpoints many disagree on, and Americans often feel guilty/conflicted about inequality. As a result, many Americans or locals won't talk about the issues much, and when they do, they will talk about them indirectly.
Homelessness is very visible, but the causal relationship between the wealth in SF and homelessness is unclear. San Francisco literally spends over $50,000 per homeless person.
Homelessness is visible but touches on a much deeper problem of land constraints. These land constraints (or housing laws/NIMBYism) are a historically bizarre situation; I think there's a small subfield in economics which argues that tech's establishment in the Bay Area is limiting GDP/innovation by many billions of dollars. People focused on X-risk have said this situation is harmful from an X-risk perspective.
Hello Dr. Greenough,
This seems thoughtful, thank you for writing this. You responded to me, but I'm one of the least influential or important people to respond to.
I’ve written some thoughts, trying to be succinct and helpful:
Around sensitive issues like money or duty, EAs, like many other conscientious people, prefer to be direct and logically exhaustive and avoid emotion. I'm not sure all the communication in this post was successful, for reasons related to this. I'm not sure it makes sense to try to write something to get a lot of funding quickly.
The people involved in OR-6 genuinely want a good candidate for the district, and good trust and coordination on issues like pandemic preparedness.
I think successful involvement from EAs requires a high level of trust and communication (which is not fully achievable on an internet forum).
With a somewhat uncertain chance of success, if you choose to invest time, I suggest you reply to or private message this person or this person, who might be able to communicate with you offline.
Ideally, you would already be in contact with EAs who can give you further advice.
Probably one of the most important people to coordinate with and speak to is Carrick Flynn, who I presume you are in communication with or have tried to speak to.
This is somewhat of a separate idea, and it would be a major time commitment, but I think your first comment was overwhelmingly liked. If you or your staff continued to write in a way that tried mainly to educate and inform EAs, I think that would be welcome and could potentially drive support to you over a longer timeframe.
Again, super low status! Bottom tier!
Datapoints:
Someone I know has had a few EA meetings. For confirmed meetings, this person's experience suggests EAs are unusually scrupulous and conscientious about attending. They have had only a small number of missed meetings, and those seemed unintentional, because the people who missed rescheduled and put a lot of effort into the meetings.
When they say these things in 1-on-1s outside of a conference, EAs very rarely make promises or plans that don't happen.
However, at conferences or get-togethers, it's common to make plans or discuss things where there isn't follow-up. This actually isn't because of a culture, but I think because people get genuinely excited and overpromise.
A high level of conscientiousness seems especially true for more established EAs. Senior EAs have, say, pointed out a minor typo in an email (like a broken link), which basically no one does.
In certain situations, the norm of silence or not following up seems efficient or even welcome. It’s extremely awkward for certain people to actively give negative signals in specific situations.
For example, a grant maker who thinks your introductions are unpromising isn't going to say "Hey, this person you introduced me to seems unpromising and you should stop" or "Hey Charles, I don't know you but you've sent me 15 PMs on the EA Forum last week, please stop —Linch".
They'll just not reply, and combined with the norms around responding, this seems pretty efficient.
I also wanted to hop on this thread and add some datapoints, which is useful because I’m the lowest status person in the comments. Super low status!
As the OP said, and other people noted, non-EA culture in North America, and especially on the West Coast, is flaky. People often suggest plans or follow-ups that don't pan out. Making plans happens as a sort of verbal decoration in casual run-ins. However, not showing up is much rarer.
Note that I think this NA culture also includes people not responding to follow-up emails or messages, even if they seemed very interested before. One reason I'm pointing this out is that there is a flip side to this norm: it makes it more natural to follow up after not getting a reply, under some norms (say, wait a decent 4 weeks, and make sure the follow-up doesn't mention any ignored emails but instead suggests a promising update).
I literally read your post for over 30 minutes to try to figure out what is going on. I don't think what I wrote above is relevant/the issue anymore.
Basically, I think what you did was write a narration to yourself, with things that are individually basically true, but that no one claims are important. You also slip in claims like "human cognition must resemble AGI for AGI to happen", but without making a tight argument[1].
You then point this resulting reasoning at your final point: "We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL based AI route."
Also, it's really hard to follow this; there are things in this argument that seem like a triple negative.
Honestly, both my decision to read this and my subsequent performance in untangling it make me think I'm pretty dumb.
[1]
For example, you say that “DL is much more successful than symbolic AI because it’s closer to the human brain”, and you say this is “defeated” later. Ok. That seems fine.
Later you “defeat” the claim that:
“DL-symbol systems (whatever those are) will be much better because DL has already shown that classical symbol systems are not the right way to model cognitive abilities.”
You say this means:
We don’t know that DL-symbol systems (whatever those are) will be much better than classical AI because DL has not shown anything about the nature of human cognition.
But no one is talking about the nature of human cognition being related to AI?
This is your final point before claiming that AGI can’t come from DL or “symbol-DL”.
You seem really informed about detailed aspects of language and modelling, and you seem to be an active researcher with a long career in modelling and reasoning.
I can’t fully understand or engage with your claims or posts, because I don’t actually know how AI and “symbolic logic” would work, how it reasons about anything, and really even how to start thinking about it.
Can you provide a primer of what symbolic logic/symbolic computing is, as it is relevant to AI (in any sense), and how it is supposed to work on a detailed level, i.e., so I could independently apply it to problems? (E.g. blog post, PDF chapter of a book).
(Assume your audience knows statistical machine learning, like linear classifiers, deep learning, rule based systems, coding, basic math, etc.).
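To make the question concrete, here is roughly the level of "symbolic"/rule-based computation I think I already understand: a toy forward-chaining rule system I sketched myself (the facts, the rule, and all names here are made up for illustration, not taken from anything you wrote). What I can't see is how this style of computation is supposed to scale into something I could independently apply to AI problems.

```python
# Toy forward-chaining rule system: facts are tuples, a rule derives new facts
# from matched facts, and we iterate until nothing new is derived.
# (Hypothetical example data; just to anchor what I'm asking about.)

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent_rule(facts):
    """If X is a parent of Y and Y is a parent of Z, infer X is a grandparent of Z."""
    new = set()
    for (r1, x, y1) in facts:
        for (r2, y2, z) in facts:
            if r1 == "parent" and r2 == "parent" and y1 == y2:
                new.add(("grandparent", x, z))
    return new

# Forward-chain to a fixed point.
while True:
    derived = grandparent_rule(facts) - facts
    if not derived:
        break
    facts |= derived

print(facts)  # includes ("grandparent", "alice", "carol")
```

I can follow this kind of thing, but it's unclear to me how you get from hand-written rules like this to the "symbolic AI" you're describing, which is why a primer would help.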
It’s more like these deep learning systems are mimicking Python very well [1]. There’s no actual symbolic reasoning. You believe this...right?
Zooming out and untangling this a bit, I think the following is a bit closer to the issue?
Deep Learning (DL) is an alternative form of computation that does not involve classical symbol systems, and its amazing success shows that human intelligence is not based on classical symbolic systems. In fact, Geoff Hinton in his Turing Award Speech proclaimed that “the success of machine translation is the last nail in the coffin of symbolic AI”.
Why is this right?
There's no reason to think that any particular computational performance is connected to human intelligence. Why do you believe this? A smartphone is amazingly better than humans at a lot of tasks, but that doesn't seem to mean anything obvious about the nature of human intelligence.
Zooming out more here, it reads like there’s some sort of beef/framework/grand theory/assertion related to symbolic logic, human intelligence, and AGI that you are strongly engaged in. It reads like you got really into this theory and built up your own argument, but it’s unclear why the claims of this underlying theory are true (or even what they are).
The resulting argument has a lot of nested claims and red herrings (the Python thing) and it’s hard to untangle.
I don't think the question of whether intelligence is pattern recognition or symbolic logic is the essence of people's concerns about AGI. Do you agree or not?
[1]
I’m not sure this statement is correct or meaningful (in the context of your argument) because learning Python syntactically isn’t what’s hard, but expressing logic in Python is, and I don’t know what this expression of logic means in your theory. I don’t think you addressed it and I can’t really fill in where it fits in your theory.
Your comment isn't really a reply, and it reduced clarity.
This is bad, since it's already hard to see the nature of the org suggested in my parent comment, and this further muddies it. Answering your comment by going through the orgs is laborious and requires researching individual orgs and knocking them down, which seems unreasonable. Finally, it seems like your org is taking up the space this org would occupy.
Yeah we’re not planning on doing humanitarian work or moving much physical plant around. Highly recommend ALLFED, SHELTER, and help.ngo for that though.
ALLFED has a specific mission that doesn't resemble the org in the parent comment. SHELTER isn't an EA org; it provides accommodation for people in the UK? It's doubtful that help.ngo or its class of orgs occupy the niche: looking at the COVID-19 response gives some sense of how a clueful org would be valuable even in well-resourced situations.
To be concrete, for what the parent org would do, we could imagine maintaining a list of crises and contingent problems in each of them, building up institutional knowledge in those regions, and preparing a range of strategies that coordinate local and outside resources. It would be amazing if this niche were even partially well served or these things were done well in past crises. Because it uses existing interest/resources, and EA money might just pay for admin, the cost-effectiveness could be very high. This sophistication would be impressive to the public and is healthy for the EA ecosystem. It would also be a "Task-Y" and on-ramp talent to EA who can be impressive and non-diluting.
It takes great literacy and knowledge to make these orgs work. Instead of deploying money or networking with EAs, such an org looks outward, brings resources to EA, and makes EA more impressive.
Earlier this year, I didn't write up or describe the org I mentioned (mostly because writing is costly and climbing the hills/winning the games involved uses effort that is limited and fungible), but also because your post existed and it would be great if something came out of it.
I asked what an AI safety alert org would look like. As we both know, the answer is that no one has a good idea what it would do, and basically, it seems to ride close to AI policy orgs, of which some exist. I don’t think it’s reasonable to poke holes because of this or the fact it’s exploratory, but it’s pretty clear this isn’t in the space described, which is why I commented.
Now, sort of because of the same challenges above, I think any vision of a response/proactive/coordination project needs a lot of focus.
So a project that tags the top EA interests of “AI” and “biorisk” is valuable (or extremely valuable by some worldviews), but doesn’t seem like it would have the same form as what was described above, e.g.:
It seems like you're advising and directing national decisions. It seems like a bit of a "pop-up" think tank? This is different from the vision above.
It seems hard and exploratory to do this alert org for AI.
Both of these traits result in a very different org than what was described above.
Do you have any comments?
For example, does the org described above make any sense?
Do you think there is room for this org?
(For natural reasons, it's unclear what form the new ALERT org will take.) But was any of the text I wrote a mischaracterization of your new org?
Some of this text is suggesting a vision different than what I expected and I have questions. What would an alert org for AI look like?
Part of the reason I'm writing is that there was a vision for another org that looks similar, but has a different form. This org would respond much more directly to crises like Afghanistan or Ukraine. It would harness sentiment and redirect loose efforts into much more effective, coordinated activity, producing a large counterfactual increase in aid and welfare. I'm guessing the vision for this org is probably more along the lines of what most people on the forum are thinking of for a rapid response organization.
In this org that mobilizes efforts effectively, the substantive differences are:
The competencies and projects are distinct from past EA competencies and projects (quick decisions in noisy environments, organizing hundreds of people with feedback loops in hours and drawing on a lot of local competence)
The amount of work and (fairly) tangible output would build trust and create a place to recruit talent, including very strong candidates who are effective/impressive in different competencies.
This has deeper strategic value in building EA, especially in regions/countries where it isn’t established and where community building efforts have difficulty.
Created and supported by EAs, it would have a lot of real-world knowledge and provide a very strong response to EA being esoteric.
A major theme of this org is proactive work: avoiding reactions to emergencies and instead preparing plans and resources in advance, when a much smaller amount of resources can be much more impactful, or even reducing the size of a crisis altogether. Socializing and executing this proactive viewpoint provides a great way to communicate EA ideas.
The reason this org wasn't written up or executed (separate from time constraints) was that the org would demand a lot of attention (it's easy to get running nominally, but the quality of leadership and decisions is important; the resulting activity and number of people involved is large and difficult to control and manage; many correct decisions seem unpopular and difficult to socialize; it needs to accommodate other viewpoints and pressure, including from very impressive non-EA leaders). This demand for executive attention made it less viable, but still above most other projects.
Another reason is that creating this org might be harder, as some of this is harder to socialize to EAs and takes plenty of focus (it's sort of hard to explain, as there aren't that many templates for this org; momentum from some sort of early networking exercise of high-status EAs has less value and is harder to achieve; initial phases are delicate, and tentative investment won't attract the kind of talent needed to drive the organization).
The ML described is basically pattern recognition.
Maybe really good pattern recognition could produce a complete set of rules and logic. But it's complex and unclear what the above means.
You think AIs are tools and can't have the capabilities that produce X-risk. Instead of investigating this, you pack this belief into the definition of the word "symbolic" and seize on people not fully engaging with this concept. Untangling this with you seems laborious and unpromising.
The forum could use a “save comment” feature.
Your link still doesn't work; you should fix it.
It looks like you linked to a special draft mode or something; the URL says "edit post":
Zooming out and being sort of blunt/a jerk about it: it’s sort of unpromising (especially when you’re seeking detailed advice on what presumably is complicated software) that you haven’t noticed this. This seems low effort. You want to demonstrate you can attend to details like this.
Yes, you should definitely write up a description of the software in your comment because, again, a few sentences/paragraphs doesn't take much time and lets technical people skim and see if it makes sense to engage. You're going to bounce out all the people with high opportunity costs.
I think your link doesn’t work. It seems good to provide a description of your desired software (a few sentences/paragraphs) and some bullet points, early in your post?
This doesn't give a lot of information about why Putin is insecure or vulnerable to being deposed, which is key to your argument for escalation. It's plausible that in the resulting scenario, the majority of people will resignedly accept the regime and its security forces. It's unclear what internal forces exist that would act against Putin or what incentive they would have.
On LessWrong, a user brought up comparisons with other despotic regimes that faced failure, in Iraq and North Korea (both of which endured great impoverishment postwar). The leaders in those states sat in the failure comfortably. Your reply seemed vague.
Why would a response to a massive invasion of cities be similar to enemy forces reacquiring territory up to their prewar border? If defeat in Ukraine were existential, wouldn't we see evidence of this sentiment already, instead of massive waves of fleeing men?
Last spring, I predicted that once the loss of occupied territory loomed, he would annex what he controlled and start talking about nuclear defense of Russia’s new borders – and here we are.
That outcome doesn't seem like evidence of insight; the ISW made a similar prediction that fits their narrative about Russian posturing[1].
The "labels" ("Finland", "Libya", ...) seem confusing; they reference/suggest implications that seem complex and don't clarify thinking:
In a "simmering war", conditional on your view that the use of nukes is possible, we should expect a continued chance of nuclear war.
I’m confused how “Kosovo” is a good analogy to an end state of Russian success. Also, “Afghanistan” seems better than “Vietnam” for a Russian withdrawal.
It’s unclear why these labels or any labels were chosen, and it’s plausible they are evidence of confused reasoning, which reduces credibility.
China would also be targeted even in a US-Russia war, to prevent it from emerging as the strongest post-war economy. My guess is that such a strategy is in force today as well, given the frosty state of Sino-US relations.
It's unclear why you believe this.
Biden and command would need to sign off on the most gratuitous murder in human history for speculative reasons. Your target list is literally from 1956, when the US had a very different perspective and worldview. Also, China has nukes now specifically for a second strike. The US does not expect China to be content with hundreds of millions dead and maimed for no reason.
[1]
As ISW wrote in May: "The Kremlin could threaten to use nuclear weapons against a Ukrainian counteroffensive into annexed territory to deter the ongoing Western military aid that would enable such a counteroffensive."
These considerations seem important. Some of them seem deep and general. Because I don't have depth and because some involve bigger views of the world, it's hard for me to write a reply that would be really useful.
One reason I gave a US-centric article is that that country is actively opposed to the other. So, to me, a moderate take from a mainstream publication that seems clueful carries more weight.
I have some moderately useful comments if you’re interested.
Some basic questions: Are you running this on GPT-NeoX-20B? If so, how are you rolling this? Are you getting technical support of some kind for training? Are you hand selecting and cleaning the data yourself?