Great summary, thanks.
Were this to happen, these orgs would not be seen as the appropriate ‘owners’ by most folk in mainstream AI (I say this as a fan of both). Their work is not really well-known outside of EA/Bay Area circles (other than people having heard of Bostrom as the ‘superintelligence guy’).
One possible path would be for a high-reputation network to take on this role. E.g. something like the Partnership on AI’s safety-critical AI group (which has a number of long-term safety folk on it as well as near-term safety) or something similar. The process might be normalised by focusing on reviewing/advising on risky/dual-use AI research in the near term—e.g. research that highlights new ways of doing adversarial attacks on current systems, or enables new surveillance capabilities (e.g. https://arxiv.org/abs/1808.07301). This could help set the precedents for, and establish the institutions needed for, safety review of AGI-relevant research (right now I think it would be too hard to say in most cases what would constitute a ‘risky’ piece of research from an AGI perspective, given most of it for now would look like building blocks of fundamental research).
Thanks for sharing these reflections. I really appreciate them and it’s exciting to see all this progress. I think some additional context about what the Human Level AI multi-conference is would be helpful. It sounds like it was a mix of non-EA and EA AI researchers meeting together?
Mostly the former; maybe 95% / 5% or higher. Probably best to describe it as a slightly non-mainstream AI conference (in that it was focused on AGI more so than narrow AI, but had high-quality speakers from DeepMind, Facebook, MIT, DARPA, etc.) in which some EA folk participated.
https://www.hlai-conf.org/
This is interesting. What about them seemed EA-aligned? When I came across EA I was attracted to it because of the Singer-style act utilitarianism, and I’ve had worries that it’s drifting too far from that and losing touch with the moral urgency that I felt in the early days. That said, I do think that actually trying to practice act utilitarianism leads to more mature views that suggest being careful about pushing ourselves too far.
Probably that they expressed interest in doing the most good possible for the world with their work.
Additional reflections from Marek, CEO of GoodAI, along with links to further media coverage, including a piece about whether or not to publish dangerous AI research.