EVAN DAVIS – MARCH 7TH, 2023

EDITOR: LARA RAMIREZ

Meet John.

John is your average Letters & Science student at Berkeley. He’s a freshman, doesn’t know what he wants to do with his future yet, and has a 1,000-word essay for his English course due at midnight. The problem is that it’s 11:48 pm, and he hasn’t written a single word.

By 11:57 pm, he submits a fairly decent, grammatically correct 1,000-word essay to his course’s website.

John is a fictional character. But the artificial intelligence (AI) that wrote the essay for him is not. ChatGPT, released by AI company OpenAI in late November 2022, took the Internet by storm with its flashy abilities. The chatbot can quickly compile information, write essays and short stories, write code, work through complicated math, hold conversations, and much more.

Individual users weren’t the only ones jumping on ChatGPT. To the amusement of many, even established tech giant Google panicked and redirected more of its focus toward developing AI. But even outside the walls of Big Tech, not everyone’s attitude toward ChatGPT is all sunshine and roses.

Jobs and AI

A common economic concern with technology is its ability to displace labor. The invention of cars made horses and buggies virtually obsolete. In a recent New York Times article, Paul Krugman argues that ChatGPT, and AI in general, could hurt the economy by replacing “knowledge workers” (those who are paid to do research or think); however, he does concede that new jobs will replace the lost ones in the long run.

The fallacy in Krugman’s argument is, in essence, the same one present in many of his other claims: he treats a decrease in economic activity, in this case aggregate employment, as bad without asking why these jobs are being displaced. Jobs displaced by AI are jobs AI completes more efficiently than people can, and AI’s taking over these tasks frees up the time workers would otherwise have spent completing them less efficiently.

Jobs are not inherently good. They’re good because they do something to satisfy someone’s desires; typically, the desire to make a profit by producing a good or service that consumers will voluntarily pay for. Jobs are a product of demand for skills or labor in the market, and they exist to satisfy consumers’ wants. As economist Walter Block argues, “we exist amidst economic scarcity and must work to live and prosper… that’s why [we should only support jobs which] produce things people actually value…” A world without jobs would be one where people’s wants are already by and large satisfied. Until we reach that world, new jobs will emerge even as old ones are phased out.

Krugman has a response. In classic Keynesian fashion, he proclaims that “in the long run, we are all dead.” In other words, since we will all die someday, we should focus more on the short-term consequences of technological progress than on the long-term ones. Yes, technology may make life easier and society wealthier, and new jobs may come about, but it will displace the jobs people already have, and the transition from old jobs to new ones won’t be instantaneous. This, in Krugman’s eyes, is a bad thing.

However, the thinking underlying Krugman’s argument is extremely dangerous. Yes, it’s likely we will all die someday. It’s also true that people genuinely suffer when their jobs are rendered obsolete. Nonetheless, industries are upended and workers are displaced constantly. As economist Per Bylund points out, the market is a continuous process of disruption and innovation through entrepreneurship. Even economists skeptical of capitalism, such as Post-Keynesian Marc Lavoie, regard entrepreneurship as capitalism’s greatest benefit: it provides a dynamic, destabilizing, innovative force, making the production of goods more efficient and even unseating established economic power (for example, Facebook replacing the “monopoly” MySpace). Indeed, AI displacing certain jobs is an example of entrepreneurship: someone figured out how to streamline knowledge work, in addition to providing a fascinating and engaging chatbot.

Let’s assume for a moment that entrepreneurship displacing jobs is bad. Was it bad when vehicles replaced the people and pack animals once used to transport large amounts of cargo? When machines replaced workers performing dangerous or unpleasant manual labor? What if safer self-driving cars were to put taxi or Uber drivers out of a job? At what point do we draw a line in the sand, declare that we will all die someday, and stall societal progress?

How about in the case of AI? For the sake of argument, let’s concede Paul Krugman’s (questionable, to say the least) assumption that government intervention is a cure for the business cycle; Krugman most certainly believes this to be the case. A common concern with countercyclical policy-making is information lag, or recognition lag: policy-makers take time to “accurately” identify economic problems and prescribe “solutions” to “fix” them. However, artificial intelligence is already being used at the Federal Reserve to more “precisely” analyze data and inform policy. In this sense, it’s replacing work which could be done by people. If it were to result in some Federal Reserve jobs being lost, should Krugman really object?

As for whether AI threatens knowledge workers themselves, it is true that ChatGPT’s learning capabilities are remarkable. In late December 2022, I asked ChatGPT to describe increasingly complex economic concepts. It got most of them right, but it failed on the most complicated and esoteric: the transformation problem.

In essence, the transformation problem is a theoretical problem with the Marxist labor theory of value. The labor theory of value posits that an industry’s equilibrium rate of profit is determined by the ratio of labor to capital used in producing that industry’s commodities. However, Marx, along with most classical economists, held that competition would equalize the rate of profit across industries. Since goods are produced with varying compositions of labor and capital, it is impossible to reconcile the labor theory of value’s industry-specific profit rates with a single equalized rate, whatever its value. The transformation problem, when formalized mathematically, can be summarized as “the problem of finding a general rule by which to transform the ‘values’ of commodities… into the ‘competitive prices’ of the marketplace.”
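For readers who want the formal version, here is a minimal sketch in standard Marxian textbook notation (the symbols and the simple price-of-production formula below are the conventional ones, not drawn from this article’s sources):

```latex
% Standard Marxian accounting for industry i:
%   c_i = constant capital (machines, materials),
%   v_i = variable capital (wages/labor), s_i = surplus value.
\begin{align}
  w_i &= c_i + v_i + s_i
    && \text{value of industry $i$'s output} \\
  r_i &= \frac{s_i}{c_i + v_i}
    && \text{profit rate implied by the labor theory of value} \\
  p_i &= (c_i + v_i)(1 + r)
    && \text{price of production under a uniform competitive rate } r
\end{align}
% Because r_i varies with the labor-to-capital ratio v_i / c_i, while
% competition enforces a single uniform r, no general rule maps the
% values w_i onto the competitive prices p_i -- hence the problem.
```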

In December 2022, ChatGPT stated that the transformation problem was “the challenge of transforming raw materials or inputs into finished goods or outputs in a way that is efficient and cost-effective…” Needless to say, this was incorrect. However, when I asked ChatGPT to describe the transformation problem again in late February 2023, it replied more accurately with “the transformation of values into prices of production.” While such learning takes time, it’s not unreasonable to assume there will come a point when ChatGPT’s technical knowledge of any given field exceeds that of the world’s foremost academics.
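For anyone who wants to run a similar probe, here is a minimal sketch using OpenAI’s Python client as it existed in early 2023; the prompt wording and the API-key placeholder are my own illustration, not a transcript of the original exchange:

```python
# Minimal sketch: ask the model behind ChatGPT to define an economic concept,
# using the openai Python package's pre-1.0 chat interface (early 2023).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the ChatGPT model exposed via the API at the time
    messages=[
        {
            "role": "user",
            "content": "Describe the transformation problem in Marxist economics.",
        }
    ],
)

# Print the answer so it can be checked against the textbook definition.
print(response["choices"][0]["message"]["content"])
```

Re-running the same prompt months apart, as described above, is a crude but easy way to see how the model’s answers change between versions.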

Indeed, the learning capabilities of ChatGPT, and of AI generally, could pose some threat to academic jobs. But it’s questionable how pronounced that threat will be. Empirical evidence suggests that student-teacher interaction is an integral part of learning, something extremely difficult to replicate with a chatbot, even one that can simulate human conversation. Furthermore, it’s unlikely that AI will replace teachers or academic researchers in the near future; more likely, it will continue being used to enhance their ability to compile information.

Furthermore, ChatGPT’s capabilities still only extend so far. It has made various mistakes in the past, from the economics-related one above to simple mathematical errors. It’s also telling that the AI’s creators have trouble controlling it and preventing misinformation to this day (although who decides what counts as misinformation is a controversial issue). It was once possible to get ChatGPT to advocate for eating glass. Given all of this, it’s unlikely society as a whole would want to give AI complete control over research anytime soon. The evidence? AI is being integrated into research as we speak, but humans are still at the helm.

AI Beyond ChatGPT

ChatGPT certainly puts the power of artificial intelligence on display. But it’s potentially just the tip of the iceberg. Many have theorized about the eventual emergence of a technological AI singularity. In essence, a technological singularity would occur when some technology, perhaps AI, surpasses human intellect and begins teaching itself, quickly becoming far more intelligent and capable than humans could ever be.

How close are we to this singularity? While ChatGPT certainly isn’t the singularity itself, some people have gone so far as to predict that we will reach it within a decade. Indeed, writer Gary Grossman claims that the rise of “generative” AI like ChatGPT is a prelude to the singularity; think of small tremors in the Earth preceding a massive earthquake. However, most predictions of when the singularity will arrive are much more modest.

Now, it’s hard to predict exactly what the singularity would entail. Computers could very well effectively abolish our economy, either by destroying it (and us) or by providing us with the means to attain our desires without working.

No one knows what the singularity holds in store for humanity. Maybe it detonates the world’s nuclear arsenal. Maybe it figures out how to cure cancer, solve climate change without more or less destroying the world’s energy base and its supply lines, and effectively abolish poverty. We won’t know until it happens, but one thing is certain: uncertainty. It’s possible the next direction AI takes is one we haven’t yet imagined.

Featured Image Source: Indian Yug

Disclaimer: The views published in this journal are those of the individual authors or speakers and do not necessarily reflect the position or policy of Berkeley Economic Review staff, the Undergraduate Economics Association, the UC Berkeley Economics Department and faculty, or the University of California, Berkeley in general.
