How would you rate current AI labs by their good or bad influence? E.g. Anthropic, OpenAI, Google DeepMind, DeepSeek, xAI, Meta AI.
Suppose the worst lab has a −100 influence on the future for each $1 it spends. A lab half as bad has a −50 influence on the future for each $1 it spends. A lab that’s actually good (by half as much) might have a +50 influence for each $1.
What numbers would you give to these labs?[1]
This rating may be biased against smaller labs, since even a tiny amount of spending increases “the number of labs” by one, which is a somewhat fixed cost. To avoid this bias, you might pretend each lab were scaled to the same size.
(Kind of crossposted from LessWrong)