I suspect that the main use of forecasting is when you need a probability for something and you don’t really have time to look into it yourself, or you wouldn’t trust your judgement even if you did.
Chris Leong
AI Risk Agility Plans—v0.1
I probably should have clarified that EA as a community building effort isn’t drawing the same talent. That said, talent that was attracted through community building efforts earlier has had more time to “level up” (as you mentioned) and orgs have likely improved their ability to recruit experienced professionals directly compared to the past (though my intuition is that some orgs haven’t fully appreciated the costs of weakening value-alignment since these impacts take a long time to emerge).
A further caveat: I don’t know exactly how things are at elite universities these days.
For fairness, I’ll just add a comment that the following edits were made after the competition deadline:
• “The first two are based on this post”
• “If doing what worked in other cities worked, it would have already worked”
• “—but it synergises nonetheless”

I made some additional edits after it was announced this essay came in second for this question.
I’ll PM you and post my comment publicly later. I’m curious whether anyone else makes similar points if I don’t do so.
“Every entity in the ecosystem gets a living profile on the platform”—Given that this has a high probability of having ecosystem-wide impacts, how much analysis have you conducted of the downsides of doing this? Are you willing to share this analysis?
Building the EA/AI Safety Scene in San Francisco
I notice myself feeling negative towards these laws.
This isn’t even an absolute rejection on my part; I’m just worried that governments are sliding too far towards paternalism without properly considering the downsides of their legislation.
Sounds like it’s a recommendation, not a rule?
Do you want paragraphs or are nested dot points fine?
Btw, I just thought I should say that I really appreciate you folks writing this post. I don’t want you to think that I disliked your article just because I disagreed on one point (which is how things can sometimes come off).
I should say that I have relatively little management experience (AI Safety ANZ at its largest when I ran it was me and Yanni part-time, a part-time ops contractor, and an intern), but that said, the key crux for me is this:
• Option 1: hire someone and severely limit their promotional potential, acknowledging the weird dynamics this might create
• Option 2: hire someone with a reasonable level of value alignment and ability to understand strategy

Option 1 might work for specialist roles (i.e. if an org needs an accountant, that person might be fine only ever being an accountant). It’s worth noting that even if you do this, there’s still a cost from bringing them into the field, insofar as someone else may hire them for a role they’d be ill-suited for.
In terms of understanding strategy, it’s important to realise that different people have wildly different worldviews. You can collapse these down to a few dot points and tell yourself that you understand the different perspectives, but you’d just be kidding yourself (I’ve made this mistake myself in the past).
I’m pretty busy, but feel free to ping me in like two weeks.
Sorry, autocomplete got me. I meant mentorship. I’ll update.
“Several participants suggested that, for most generalist roles, a competent senior professional can get to a working level of AIS context in weeks”
I’m pretty skeptical of this, without a ton of individual mentorship that I doubt anyone does—or without resources that don’t currently exist. My intuition is that people who make these claims have low standards.
Int/a is still new, so my first-level analysis is that it’s okay for it to still be in the ideation phase. My second-level analysis is that AI timelines might be short, so maybe this phase needs to be cut short.
I suspect whether it helped or hindered folks likely depends on where they were pre-EA. Did they need to learn to pay more or less attention to cost effectiveness?
Beyond Human Wisdom: Can Man Survive the Rise of AGI?
I agree that the future will be profoundly weird, although it’s an extra step to claim that the future will be profoundly weird in a way that changes what actions animal welfare folks should take (as opposed to being weird in some orthogonal manner).
One thing that’s unclear to me is whether attempts to use AI systems to augment human capabilities in these domains are in scope, or whether the round is focused on direct enhancement of these capabilities.
I’m also curious about your opinion on whether biological-enhancement based approaches are likely to bear fruit in time to matter. Do you think it’s plausible that timelines might be long on our current path or are you more hoping that there’s a pause that provides humanity with more time?
(Alternatively, is it more that you think we need enhanced capabilities to succeed at alignment, even if current timeline projections make this appear challenging?)