What should you suggest companies & entrepreneurs do to use AI safely?

Let’s say you get the chance to talk to someone in one of the following situations:

  • An entrepreneur attempting to apply existing AI tech (e.g. ChatGPT) to a new niche or application (e.g. helping companies improve their sales messaging)

  • The CEO of a large company or nonprofit (or, really, anyone else with decision-making power), who would like to use existing AI tools to make their operations more efficient, help generate text or images, etc.

Suppose, in each case, that the person is somewhat concerned about AI x-risk, since they’ve heard you or someone else mention it, but doesn’t realise how big a deal it might be.

What, concretely, should you suggest they do to reduce AI x-risk?

I would be very curious to hear your thoughts.


Why you might be concerned that these activities could increase AI x-risk

  1. An organisation could notice how much profit (or other value) can be generated by applying recently developed AI, or a startup could attract significant AI hype from others (e.g. users) noticing the same.

  2. Consequently, the organisation, other organisations or individuals supporting it, or people at large might (a) invest more in AI capabilities research (without a sufficient offsetting investment in AI safety research), or (b) oppose regulations that slow down or limit the use of new AI tools.

  3. Other reasons—please let me know of any you think of


Suggestions you might make, and why they don’t seem satisfactory

  1. Just don’t use AI at all.

    1. In some cases, this seems too cautious. (E.g. should no-one use ChatGPT?)

    2. It’s very unlikely to persuade someone who isn’t convinced of AI risk, even if abstaining were the decision-theoretically correct choice under uncertainty. This is especially true if they are excited about AI.

    3. Race dynamics mean that organisations not (aggressively) using AI may be outcompeted by those that do.

  2. Do not build new models; only apply existing ones.

    1. This seems unlikely to change much; most people in this situation would already be applying only existing models, and those planning to build new models are unlikely to be persuaded, for the reasons given in 1.2 and 1.3.

  3. Use less powerful AI wherever possible (e.g. GPT-3 instead of GPT-4)

    1. Maybe a useful suggestion (one way to implement it is sketched below).

    2. But there’s probably a ‘tax’ in terms of user experience, accuracy, efficiency, etc.
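
To make suggestion 3 concrete: one pattern is a ‘cascade’ that defaults to the weaker model and escalates to the stronger one only when the output fails a task-specific quality check, so the tax is paid only on the queries that need it. Below is a minimal sketch assuming the OpenAI Python client (v1.x); the model names and the good_enough check are illustrative placeholders, not recommendations.

```python
# Minimal sketch of a model cascade: try the weaker model first,
# escalate only when its output fails a quality check.
# Assumes the OpenAI Python client (v1.x); model names and the
# good_enough heuristic are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer(prompt: str, good_enough=lambda text: bool(text.strip())) -> str:
    """Return a completion, preferring the cheaper, less capable model."""
    text = ""
    for model in ("gpt-3.5-turbo", "gpt-4"):  # weakest first
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        text = response.choices[0].message.content or ""
        # Swap in a task-specific check here (validation rules, user
        # feedback, etc.); the default only rejects empty output.
        if good_enough(text):
            return text
    return text  # fall back to the strongest model's output
```

The point of this design is that the stronger model is only invoked when the weaker one demonstrably fails, which keeps most traffic on the less powerful system while capping the quality tax on hard queries.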