I suspect that the main use of forecasting is when you need a probability for something and you don't really have time to look into it yourself, or you wouldn't trust your judgement even if you did.
I probably should have clarified that EA as a community building effort isn’t drawing the same talent. That said, talent that was attracted through community building efforts earlier has had more time to “level up” (as you mentioned) and orgs have likely improved their ability to recruit experienced professionals directly compared to the past (though my intuition is that some orgs haven’t fully appreciated the costs of weakening value-alignment since these impacts take a long time to emerge).
A further caveat: I don’t know exactly how things are at elite universities these days.
In fairness, I'll just add a comment that the following edits were made after the competition deadline:
• “The first two are based on this post”
• “If doing what worked in other cities worked, it would have already worked”
• “—but it synergises nonetheless”

I made some additional edits after it was announced this essay came in second for this question.
I'll PM you and post my comment publicly later. I'm curious whether anyone else makes similar points if I don't do so.
“Every entity in the ecosystem gets a living profile on the platform”—Given that this has a high probability of having ecosystem-wide impacts, how much analysis have you conducted of the downsides of doing this? Are you willing to share this analysis?
I notice myself feeling negative towards these laws.
This isn't even an absolute rejection on my part; I'm just worried that governments are sliding too far towards paternalism without properly considering the downsides of their legislation.
Sounds like it's a recommendation, not a rule?
Do you want paragraphs or are nested dot points fine?
Btw, I just thought I should say that I really appreciate you folks writing this post. I don’t want you to think that I disliked your article just because I disagreed on one point (which is how things can sometimes come off).
I should say that I have relatively little management experience (at its largest, AI Safety ANZ when I ran it was me and Yanni part-time, a part-time ops contractor, and an intern), but that said, the key crux for me is this:
• Option 1: hire someone and severely limit their promotional potential, acknowledging the weird dynamics this might create
• Option 2: hire someone with a reasonable level of value alignment and ability to understand strategy

Option 1 might work for specialist roles (i.e. if an org needs an accountant, that person might be fine only ever being an accountant). It's worth noting that even if you do this, there's still a cost from bringing them into the field, insofar as someone else may hire them for a role they'd be ill-suited for.
In terms of understanding strategy, it's important to realise that different people have wildly different worldviews. You can collapse these down to a few dot points and tell yourself that you understand the different perspectives, but you'd just be kidding yourself (I've made this mistake myself in the past).
I’m pretty busy, but feel free to ping me in like two weeks.
Sorry, autocomplete got me. I meant mentorship. I'll update.
“Several participants suggested that, for most generalist roles, a competent senior professional can get to a working level of AIS context in weeks”
I'm pretty skeptical of this without a ton of individual mentorship (which I doubt anyone provides), or without resources that don't currently exist. My intuition is that people who make these claims have low standards.
Int/a is still new, so my first-level analysis is that it's okay for it to still be in the ideation phase. My second-level analysis is that AI timelines might be short, so maybe this phase needs to be cut short.
I suspect whether it helped or hindered folks likely depends on where they were pre-EA. Did they need to learn to pay more or less attention to cost effectiveness?
I agree that the future will be profoundly weird, although it's an extra step to claim that the future will be profoundly weird in a way that changes what actions animal welfare folks should take (as opposed to being weird in some orthogonal manner).
“We do have other expertise that we'd be happy to trade. Many AI safety folks have proposed just this: animal welfare campaigners are experienced with guerrilla campaigns that have pressured some of the world's largest companies to make modest but meaningful concessions to ethics. We could trade these services to the AI movement, using our skills to win stronger safety and alignment commitments from leading labs, in exchange for technical safety and alignment researchers giving animals their due consideration in overall alignment strategy.”
I’m in favour of this proposal. I’d love to see it explored in a future post.
I expect people to update somewhat. My split was more about where people end up falling after initial exposure to arguments on both sides.
In the past, AI didn't feel so pressing to the AI crowd, so they had more space to explore, and the discussion of animals and global poverty didn't feel like dead weight.
I don’t really know.
But that’s a good point: Chesterton’s fence is a pretty good heuristic.
Probably some people were being a bit pushy advertising their services?
“The framing of your question suggests EA's role is to prescribe actions”
Was I presuming this? I didn't think I was. I was just talking about how it is hard to simultaneously meet the needs of folks with very different worldviews.

In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
You can definitely run a session on this. The challenge is where do you go from there? If you take the conversation to more concrete issues you risk losing half your audience. I’m not claiming this is impossible, just that it’s tricky.
“I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis”
I'm curious what your explanation would be. Mine is that the media landscape is filled with hype; there are all these philosophical arguments you can make that are hard to evaluate; and even if you know that some predicted crises will come true, most people don't have high confidence that they could pick which ones. Even if they could, it would take a massive amount of time, people's lives are pretty busy, and what would they do with that knowledge anyway?
One thing that's unclear to me is whether attempts to use AI systems to augment human capabilities in these domains are in scope, or whether the round is focused on direct enhancement of these capabilities.
I'm also curious about your opinion on whether biological-enhancement-based approaches are likely to bear fruit in time to matter. Do you think it's plausible that timelines might be long on our current path, or are you more hoping that there's a pause that provides humanity with more time?
(Alternatively, is it more that you think we need enhanced capabilities to succeed at alignment, even if current timeline projections make this appear challenging?)