Outreach can be valuable, although high-value opportunities are rare. If you can publish, lecture, or talk 1-on-1 with highly relevant audiences, then you may sway the Zeitgeist a little and so contribute towards getting donors or researchers on board.
Relevant audiences include:
tech moguls and other potential big donors; people who may have the potential to become or influence those moguls.
researchers in relevant areas such as game theory; smart people in elite educational tracks who may have the potential to become or influence such researchers.
I already mentioned this in my response to kbog above, but I think EAs should approach this cautiously; AI safety is already an area with a lot of noise, with a reputation for being dominated by outsiders who don’t understand much about AI. I think outreach by non-experts could end up being net-negative.
1-on-1 engagement with highly relevant audiences is very different from general online discourse.
I agree with this concern, thanks. When I rewrite this post in a more finalized form I’ll include reasoning like this.