DM me if you want to talk about probabilistic modelling (of policies/tech progress/etc)
Got involved with EA in 2017
Have you got a link to where this excerpt came from?
I don’t think this disclosure shows that much awareness, as the notes seem to dismiss it as a problem, unless I’m misunderstanding what Holden means by “don’t assume things about my takes on specific AI labs due to this”. It sounds like he’s claiming he’s able to assess these things neutrally, which is quite a big claim!
Why is this getting downvoted? This comment seems plainly helpful; it’s an important thing to highlight.
I can see why some people think the publicity effects of the letter might be valuable, but — when it comes to the 6-month pause proposal itself — I think Matthew’s reasoning is right.
I’ve been surprised by how many EA folk are in favour of the actual proposal, especially given that AI governance literature often focuses on the risks of fuelling races. I’d be keen to read people’s counterpoints to Matthew’s thread(s); I don’t think many expect GPT-5 will pose an existential threat, and I’m not yet convinced that ‘practice’ is a good enough reason to pursue a bad policy.
Fair enough! It could be useful, so I’d be happy to be wrong here.
I don’t think gossip ought to be that public or legible.
Firstly, I don’t think it would work for achieving your goals; I would still hesitate to have my opinions uploaded without feeling very confident in them (rumours are powerful weapons, and I wouldn’t want to start one while uncertain).
Secondly, I don’t think it’s worth the costs of destroying trust. A whole bunch more people will distance themselves from EA if they know their public reputation is on the line with every interaction. (I also agree with Lawrence on the Slack leaks, FWIW).
I see why you might want public info (akin to scandal markets) when people are more high-profile, but I don’t think Sam Bankman-Fried would have passed that bar in 2018.
I think the main problem being faced again and again is that internal reporting lacks teeth.
I think public reporting is an inadequate alternative. It’s a big demand to ask people to become public whistleblowers, especially since the things worth reporting often aren’t black and white. It’s hard to speak out publicly about things you’re not certain of (eg because of self-doubt, wondering whether it’s even worth bothering, the reputation you’d create for yourself, etc).
Additionally, the subsequent discourse seems to put a further burden on those speaking out. If I spoke up about something only to see a bunch of people doubt whether what I’d said was true (or, as in previous cases, had to engage with the wrongdoer and proofread their account of events), I’d probably regret my choice.
Okay great, that makes sense to me. Thank you very much for the clarification!
I am unsure what you mean by AGI. You say:
For purposes of our definitions, we’ll count it as AGI being developed if there are AI systems that power a comparably profound transformation (in economic terms or otherwise) as would be achieved in such a world [where cheap AI systems are fully substitutable for human labor].
and:
causing human extinction or drastically limiting humanity’s future potential may not show up as rapid GDP growth, but automatically counts for the purposes of this definition.
If someone used AI capabilities to create a synthetic virus (which they wouldn’t have been able to do in the counterfactual world without that AI-generated capability) and thereby caused the extinction or drastic curtailment of humanity, would that count as “AGI being developed”?
My instinct is that this should not be considered AGI, since it is the result of just narrow AI plus a human. However, the caveat implies that it would count, because an AI system would have powered human extinction.
I get the impression you want to count ‘comprehensive AI systems’ as AGI if the system is able to act ~autonomously from humans[1]. Is that correct?
Putting it another way:
If a company employs both humans and lots of AI technologies, and it brings about a “profound transformation (in economic terms or otherwise)”, I assume the combined capability of the AI elements of the company would need to be as general as a single AGI for this to count.
If they do not sum to that level of generality, but are still used to bring about a transformation, I think that should not resolve ‘AGI developed’ positively. However, as the definition currently stands, it looks like it would.
Thanks for this!
For others, as well as fixing/removing the misplaced percent symbol, you also need to do the following:
1. In a new tab, type or paste about:config in the address bar and press Enter/Return. Click the button accepting the risk.
2. In the search box above the list, type or paste userprof and pause while the list is filtered. If you do not see anything on the list, please ignore the rest of these instructions; you can close this tab now.
3. Double-click the toolkit.legacyUserProfileCustomizations.stylesheets preference to switch the value from false to true.
I can see this getting a bit annoying/confusing, as it also blocks out commenters’ usernames, but you can always hover over the empty space and read it from the link preview on the bottom-left of the window.
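For anyone who wants to see roughly what’s involved, here’s a minimal sketch of what the stylesheet itself might look like once that preference is enabled. Firefox then loads chrome/userContent.css from your profile folder on restart; the @-moz-document wrapper and file location are the standard mechanism, but the selector below is just a placeholder, so substitute whatever the original stylesheet actually targets.

```css
/* chrome/userContent.css
   Create a folder named "chrome" inside your Firefox profile folder,
   save this file there, then restart Firefox.
   The selector below is a placeholder; use the one from the original
   stylesheet. */
@-moz-document domain(forum.effectivealtruism.org) {
  /* Hypothetical selector for the element being hidden (eg usernames) */
  .author-name {
    visibility: hidden;
  }
}
```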
A great article on this, for those who haven’t read it yet: What the EA community can learn from the rise of the neoliberals
I enjoyed reading these updated thoughts!
A benefit of some of the agency discourse, as I tried to articulate in this post, is that it can foster a culture of encouragement. I think EA is pretty cool for giving people the mindset to actually go out and try to improve things; tall poppy syndrome and ‘cheems mindsets’ are still very much the norm in many places!
I think a norm of encouragement is distinct from instilling an individualistic sense of agency in everyone, though. The former should reduce the chances of Goodharting, since you’ll ideally be working out your goals iteratively with like-minded people (mitigating the risk of single-mindedly pursuing an underspecified goal). It’s great to have conviction, but conviction in everything you do by default could stop you from finding the things you really believe in.
I would happily vouch for the value of these events, as an attendee of the York group. They’re fun, engaging, and definitely give an opportunity for members to dive into EA concepts.
It’s just fun to hang out with a group of engaged EAs in nice cafés regularly (with interesting topics to talk about)!
Thanks, here’s the link for others: https://forecasting.substack.com/p/alert-minutes-for-week-172024