Alternative idea: AI companies should have a little checkbox saying “Please use 100% of the revenue from my subscription to fund safety research only.” This avoids some of the problems with your idea and also introduces some new problems.
I think there is a non-infinitesimal chance that Anthropic would actually implement this.
Ya, maybe. This concern/way of thinking just seems kind of niche. Probably only a very small demographic overlaps with me here, so I wouldn't expect it to amount to a consequential amount of money for, e.g., Anthropic or OpenAI.
That checkbox would be really cool though. It might ease friction/dissonance for people who buy into high p(doom) or relatively non-accelerationist perspectives. My views are not representative of anyone but me, but a checkbox like that would be a killer feature for me and would certainly win my $20/mo :). And maybe, y'know, the same for all 100 people or whatever who would care and see it that way.