I think it might be cool if an AI Safety research organization ran a copy of an open model or something and I could pay them a subscription to use it. That way I know my LLM subscription money is going to good AI stuff, and not towards AI companies that I don't like or don't want more of, on net.
Idk, existing independent orgs might not be the best place to do this bc it might "damn them" or "corrupt them" over time. Like, this could lead them to "selling out" in a variety of ways you might conceive of.
Still, I guess I am saying that to the extent anyone is going to actually "make money" off of my LLM usage subscriptions, it would be awesome if it were just a cool independent AIS lab I personally liked or similar.
(I don't really know the margins and unit economics, which seems like an important part of this pitch lol).
Like, if "GoodGuy AIS Lab" sets up a little website and inference server (running Qwen or Llama or whatever), then I could pay them the $15-25 a month I may have otherwise paid to an AI company. The selling point would be that less "moral hazard" is better vibes, but probably only some people would care about this at all and it would be a small thing. But also, it's hardly like a felt sense of moral hazard around AI is a terribly niche issue.
This isn't the "final form" of this I have in mind necessarily; I enjoy picking at ideas in the space of "what would a good guy AGI project do" or "how can you do neglected AIS / 'AI go well' research in a for-profit way".
I also like the idea of an explicitly fast-follower project for AI capabilities. Like, accelerate safety/security relevant stuff and stay comfortably middle of the pack on everything else. I think improving GUIs is probably fair game too, but not once it starts to shade into scaffolding I think? I wouldn't know all of the right lines to draw here, but I really like this vibe.
This might not work well if you expect gaps to widen as RSI becomes a more important input. I would argue that seems too galaxy-brained given that, as of writing, we do live in a world with a lot of mediocre AI companies that I believe can all provide products of ~comparable quality.
It is also just kind of a bet that in practice it is probably going to remain a lot less expensive to stay a little behind the frontier than to be at the frontier. And that, in practice, it may continue to not matter in a lot of cases.
fwiw I think you shouldn't worry about paying $20/month to an evil company to improve your productivity, and if you want to offset it I think a $10/year donation to LTFF would more than suffice.
Can you say more on why you think a 1:24 ratio is the right one (as opposed to lower or higher ratios)? And how might this ratio differ for people who have different beliefs than you, for example about xrisk, LTFF, or the evilness of these companies?
I haven't really thought about it and I'm not going to. If I wanted to be more precise, I'd assume that a $20 subscription is equivalent (to a company) to finding a $20 bill on the ground, assume that a Δ% increase in spending on safety cancels out a Δ% increase in spending on capabilities (or think about it and pick a different ratio), and look at money currently spent on safety vs capabilities. I don't think P(doom) or company-evilness is a big crux.
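That back-of-the-envelope offset logic can be sketched numerically. The spending figures below are purely illustrative placeholders I picked so the ratio works out to 1:24, not real data on safety or capabilities spending:

```python
# Offset arithmetic sketch: treat a subscription as pure marginal revenue
# to the company, and assume a donation "cancels" an equal *percentage*
# increase on the safety side.

subscription_per_year = 20 * 12       # $240/yr paid to the company

# Hypothetical global spending levels (placeholders, not actual figures):
capabilities_spending = 240e9         # assume ~$240B/yr on capabilities
safety_spending = 10e9                # assume ~$10B/yr on safety

# Fraction by which your subscription raises capabilities spending:
pct_increase = subscription_per_year / capabilities_spending

# Donation needed so safety spending rises by the same fraction:
offsetting_donation = pct_increase * safety_spending
print(f"${offsetting_donation:.2f}/yr")
```

Under these made-up numbers the offsetting donation comes out to $10/yr against a $240/yr subscription, i.e. the 1:24 ratio the follow-up question asks about; plugging in different safety-to-capabilities spending estimates changes the ratio accordingly.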
Alternative idea: AI companies should have a little checkbox saying âPlease use 100% of the revenue from my subscription to fund safety research only.â This avoids some of the problems with your idea and also introduces some new problems.
I think there is a non-infinitesimal chance that Anthropic would actually implement this.
Ya, maybe. This concern/way of thinking just seems kind of niche. Probably only a very small demographic who overlaps with me here. So I guess I wouldn't expect it to be a consequential amount of money to e.g. Anthropic or OpenAI.
That checkbox would be really cool though. It might ease friction/dissonance for people who buy into high P(doom) or relatively non-accelerationist perspectives. My views are not representative of anyone but me, but a checkbox like that would be a killer feature for me and certainly win my $20/mo :). And maybe, y'know, all 100 people or whatever who would care and see it that way.