More thoughts on “80,000 hours should remove OpenAI from the Job Board”
- - - -
A broadly robust heuristic I walk around with in my head is “the more targeted the strategy, the more likely it will miss the target”.
Its origin is from when I worked in advertising agencies, where TV ads for car brands all looked the same because they needed to reach millions of people, while email campaigns were highly personalised.
The thing is, if you get hit with a TV ad for a car, the worst-case scenario is mild: its generic content (dude looks cool while driving his 4WD through rocky terrain) can’t have a strongly negative effect. But when an email that attempts to be personalised misses the mark, you feel it (Hi {First Name : Last Name}!).
Anyway, effective altruism loves targeting. It is virtually built from the ground up on it (RCTs and all that, aversion to PR, etc.).
I have been thinking about this in relation to the recent “80,000 hours should remove OpenAI from the Job Board” post.
80k is applying an extremely targeted strategy by recommending people work in some jobs at the workplaces creating the possible death machines (caveat 1, caveat 2, caveat 3, …, caveat n).
There are so many failure modes involved with those caveats that it is virtually impossible for this not to go catastrophically poorly in at least a few instances, taking the mental health of a bunch of young and well-meaning EAs with it.