The macroeconomics of Artificial Intelligence (III)

A study of 5,000 workers who do complex customer assistance jobs at a call center found that among workers who were given the support of an AI assistant, the least skilled or newest workers showed the greatest productivity gains (Brynjolfsson, Li, and Raymond 2023).


If employers shared these gains with workers, the distribution of income would become more equal.

In addition to creating a future of lower income inequality, AI may help labor in another more subtle, but profound, sense. 

If AI is a substitute for the most routine and formulaic kinds of tasks, then by taking tedious routine work off human hands, AI may complement genuinely creative and interesting tasks, improving the basic psychological experience of work, as well as the quality of output. 

Indeed, the call center study found not only productivity gains, but reduced worker turnover and increased customer satisfaction for those using the AI assistant. 

Third fork: Industrial concentration   

Since the early 1980s, industrial concentration—which measures the collective market share of the largest firms in a sector—has risen dramatically in the United States and many other advanced economies. 

These large superstar firms are often much more capital-intensive and technologically sophisticated than their smaller counterparts.

There are again two divergent scenarios for the impact of AI.

Higher-concentration future

In the first scenario, industrial concentration increases, and only the largest firms intensively use AI in their core business. AI enables these firms to become more productive, profitable, and larger than their competitors. 

AI models become ever more expensive to develop, in terms of raw computational power—a massive up-front cost that only the largest firms can afford—in addition to requiring training on massive datasets, which very large firms already have from their many customers and small firms do not. Moreover, after an AI model is trained and created, it can be expensive to operate. 

For example, the GPT-4 model cost more than $100 million to train during its initial development and requires about $700,000 a day to run. 
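A back-of-envelope calculation, using only the daily figure quoted above, shows why operating costs alone can be a barrier for smaller firms:

```python
# Rough annual operating cost implied by the quoted figure of
# about $700,000 per day to run GPT-4. This is a simple
# back-of-envelope multiplication, not a reported annual figure.
daily_cost = 700_000            # dollars per day, as cited in the text
annual_cost = daily_cost * 365  # ignore leap years for a rough estimate
print(f"${annual_cost:,}")      # prints $255,500,000
```

At roughly a quarter of a billion dollars a year just to keep the model running, ongoing operation is itself a fixed cost few firms can absorb.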

The typical cost of developing a large AI model may soon be in the billions of dollars. Executives at the leading AI firms predict that the scaling laws that show a strong relationship between increases in training costs and improved performance will hold for the foreseeable future, giving an advantage to the companies with access to the biggest budgets and the biggest datasets.
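Scaling laws of the kind described above are often summarized as a power law in which model loss falls smoothly as training compute grows. The sketch below is purely illustrative: the exponent and constant are hypothetical values chosen to show the shape of the relationship, not empirical estimates.

```python
# Illustrative power-law scaling: loss declines as training compute rises.
# The exponent (alpha = 0.05) and constant (c = 100.0) are hypothetical,
# chosen only to demonstrate the functional form.

def scaling_loss(compute_flops: float, alpha: float = 0.05, c: float = 100.0) -> float:
    """Hypothetical model loss as a power law of training compute."""
    return c * compute_flops ** -alpha

# Under a power law, doubling compute always yields the same
# multiplicative improvement in loss, regardless of starting scale:
ratio = scaling_loss(2e24) / scaling_loss(1e24)  # equals 2 ** -0.05
```

The key economic point is visible in the last line: each constant-factor gain in performance requires a constant-factor (that is, ever larger absolute) increase in spending, which favors the firms with the biggest budgets.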

It may be, then, that only the largest firms and their business partners develop proprietary AI—as firms such as Alphabet, Microsoft, and OpenAI have already done and smaller firms have not. The large firms then get larger.

More subtly, but perhaps more important, even in a world in which proprietary AI does not require a large fixed cost that only the largest firms can afford, AI might still disproportionately benefit the largest firms, by helping them better internally coordinate their complex business operations—of a kind that smaller and simpler firms do not have. 

The “visible hand” of top management managing resources inside the largest firms, now backed by AI, allows the firm to become even more efficient, challenging the Hayekian advantages of small firms’ local knowledge in a decentralized market.

Lower-concentration future

In the lower-industrial-concentration future, however, open-source AI models (such as Meta’s LLaMA or Berkeley’s Koala) become widely available.

A combination of for-profit companies, nonprofits, academics, and individual coders create a vibrant open-source AI ecosystem that enables broad access to developed AI models. 

This gives small businesses access to industry-leading production technologies they could never have had before.

Much of this was foreshadowed in an internal memo leaked from Google in May 2023, in which a researcher said that “open-source models are faster, more customizable, more private, and pound-for-pound more capable” than proprietary models. 

The researcher argued that small open-source models can be iterated on quickly by many people, eventually surpassing large proprietary models that are slowly iterated by a single team, and that open-source models can be trained more cheaply.


In the Google researcher’s view, open-source AI may end up dominating the expensive proprietary models.

