Interesting post, thanks for sharing. Some rambly thoughts:[1]
I’m sympathetic to the claim that work on digital minds, AI character, macrostrategy, etc. is of similar importance to AI safety/AI governance work. However, these areas seem much harder to work on: the fields are so nascent, and feedback loops and mentorship so much scarcer than in AIG/AIS, that it seems much easier to have zero or negative impact by shaping their early direction poorly.
For this reason, I wouldn’t want marginal talent working on these areas. It’s plausible that people who are unusually suited to this kind of abstract, low-feedback, high-confusion work, and who are generally sharp and wise, should consider it. But those people are also well-suited to high-leverage AIG/AIS work, and I’m uncertain whether I’d trade a wise, thoughtful person working on AIS/AIG for one thinking about e.g. AI character.
(We might have a similar bottom line: I think the approach of “bear this in mind as an area you could pivot to in ~3-4y, if better opportunities emerge” seems reasonable.)
Relatedly, I think EAs tend to overrate interesting, speculative, philosophy-flavoured thinking, because it’s very fun for the kind of person who tends to get into EA. (I’m this kind of person too :) ). When I try to consciously correct for this, I’m less sure that the neglected cause areas you mention are as important as they seem.
I’m worried about motivated reasoning when EAs think about the role of EA going forwards. (And I don’t think we should care about EA qua EA, just EA insofar as it’s one of the best ways to make good happen.) So reason #2 you mention wasn’t super compelling to me: it felt more like starting from “hmm, EA is in trouble, what can we do?” than reasoning from “how do we make good happen?”
That being said, if it’s cheap to do so, more EA-flavoured writing on the Forum seems great! The EAF has been pretty stale. I was brainstorming about this earlier: initially I was worried about the chilling effect of writing so much in public (commenting on the EAF is way higher effort for me than commenting in Google Docs, for example), but I think some cool new ideas can and probably should be shared more. I like Redwood’s blog a lot partly for this reason.
In my experience at university, in my final two years the AI safety group was just way more exciting, serious, and intellectually alive than the EA group. This is a caricature, but one way of describing it would be that (at the extremes) the AI safety group selected for actually taking ideas seriously and wanting to do things, and the EA group correspondingly selected for wanting to pontificate about ideas and not get your hands dirty. I think EA groups engaging with more AGI-preparedness-type topics could help make them exciting and alive again, but it would be important imo to avoid reinforcing the idea that EA groups are for sitting round and talking about ideas, not for taking them seriously. (I’m finding this hard to verbalise precisely; I think the rough gloss is “I’m worried about these topics having more of a vibe of ‘interesting intellectual pastime’, and if EA groups tend towards that vibe anyway, making discussing them feel ambitious and engaging and ‘doing stuff about ideas’-y sounds hard”.)
[1] I would have liked to make this more coherent and focused, but that would have taken enough time/effort that realistically I just wouldn’t have done it, and I figured a rambly comment was better than no comment.
Was the AIS group led by people who had EA values or were significantly involved with EA?
Yes, at least initially. (Though fwiw my takeaway from that was more like, “it’s interesting that these people wanted to direct their energy towards AI safety community building and not EA CB; also, yay for EA for spreading lots of good ideas and promoting useful ways of looking at problems”. This was in 2022, when I think almost everyone who thought about AI safety had heard about it via EA/rationalism.)
What sort of things did the AIS group do that gave the impression they were taking ideas more seriously? Was it more events oriented around taking action (e.g. hackathons)? Members engaging more with the ideas outside of club meetings? More seriousness about reorienting their careers based on the ideas?
Mostly the latter two, yeah.