Here I illustrate some other examples of the influence of human moral values on companies. These are all, of course, revealed preferences, but my point is that revealed preferences can importantly reflect endorsed moral values.
People influence companies partly on the basis of what they think is right, through demand, boycotts, law, regulation, and other political pressure.
Companies, for the most part, can’t just go around directly murdering people (though companies can still harm people, e.g. through misinformation about the health risks of their products, or because people don’t care enough about the harms). (Maybe this is largely for selfish reasons: people don’t want to be killed themselves, and there’s a slippery slope if you allow exceptions.)
GPT has content policies that reflect people’s political/moral views. Social media companies have use and content policies and have banned various users for harassment, racism, or other things that are politically unpopular, at least among a large share of users or advertisers (who in turn reflect consumers). This seems pretty standard.
Many companies have boycotted Russia since the invasion of Ukraine. Many companies have also committed to sourcing only cage-free eggs after corporate outreach and campaigns, despite low consumer demand for cage-free eggs.
X (Twitter)’s policies on hate speech have changed under Musk, presumably primarily because of his views. This seems to have cost X users and advertisers, but X remains popular, which shows that some potentially important decisions about how a technology is used are largely in the hands of the company and its leadership, not just driven by profit.
I’d likewise guess it actually makes a difference that the biggest AI labs are (I would assume) led and staffed primarily by liberals. They can push their own views onto their AI, even at the cost of some profit and market share. And some choices may have minimal near-term consequences for demand or profit but could be important for the far future. If a company decides to make its AI object more to various forms of mistreatment of animals or artificial consciousness, will this really cost it much profit or market share? The answer may also depend on the markets the AI is primarily used in; e.g., this would matter even less for an AI that earns its profits primarily by trading stocks.
It’s also often hard to say how much something affects a company’s profits.