
Writer: Richard Li
Editor: Izzy Bayani
In a Berkeley Economic Review article from 2018, Vatsal Bajaj anticipates that artificial intelligence may create a “new social class.” Drawing on comparisons to the Industrial Revolution and citing historian Yuval Noah Harari’s theory on the creation of a “useless class,” he surveys a range of views on the large-scale impacts of automation by artificial intelligence. Seven years on, this discussion is more salient than ever. The world’s most valuable companies are locked in an AI arms race, and predictions run rampant about when we will reach artificial general intelligence, the point at which AI matches human cognitive capabilities.
Major news outlets report mounting pressure on young people, especially recent and prospective college graduates, struggling to find jobs in a software sector that once teemed with demand for new hires. Rose Horowitch of The Atlantic notes that this once-reliable path is now past its prime, and the number of computer science graduates has declined at many universities. Natasha Singer of The New York Times confirms this with anecdotes of recent college graduates struggling to find work, describing their experiences as “soul-crushing.” Data from the Federal Reserve Bank of New York (2023) indicate that computer science and computer engineering graduates have among the highest unemployment rates across all majors.
The common line of reasoning is as follows: the adoption of generative AI tools leads to automation of routine tasks, which are often the primary work of entry-level developers. In turn, this allows companies to achieve the same or greater output with fewer junior hires, resulting in a decline in job postings for entry-level tech roles.
The New York Times article mentioned above is provocatively titled “Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle.” At first glance, it is easy to extrapolate and assert that high-paying or desirable jobs are now increasingly and irreversibly scarce, a conclusion that echoes what economists call the “lump of labor fallacy.” Standard economic reasoning rejects this view for two reasons: the pie of labor is not fixed, and when an economy loses jobs in one industry, growth tends to emerge in others. This theoretical comfort, however, directly conflicts with the experiences of many recent college graduates. For an American public that is highly wary of AI, hearing corporate voices such as Ford CEO Jim Farley say that AI is “going to replace literally half of all white-collar workers” may render any argument from economic history, theory, or empirics hard to swallow.
In a paper titled “The Labor Market Effects of Generative Artificial Intelligence,” Jonathan Hartley and coauthors convey exactly the kind of message that is difficult to frame in the present moment: AI is more of a complement than a substitute for labor. The authors find evidence consistent with AI “raising worker productivity and earnings” in the short term. Related work by researchers at the Stanford Digital Economy Lab has similarly been unable to draw significant causal conclusions linking artificial intelligence to job markets. The Stanford researchers, however, document worrying phenomena: a significant decline in young workers’ employment in AI-exposed jobs after controlling for firm- and industry-level shocks, and a stall in employment growth among workers aged 22–25 since ChatGPT’s public release, even as economy-wide employment continued to rise.
These facts suggest that fewer young people can reach the bottom rung of the career ladder, an idea explored by LinkedIn executive Aneesh Raman, commentator Ezra Klein, the World Economic Forum, and many others. This is ironic, given that younger people have typically been associated with higher proficiency with emerging technologies, and that research shows younger Americans have greater awareness of AI’s presence in daily life than their older counterparts.
Nearly a century ago, John Maynard Keynes argued in his essay “Economic Possibilities for Our Grandchildren” that the massive changes in industrial and agricultural productivity of his time were bringing about “technological unemployment.” He framed this as a growing pain on the journey toward humankind “solving its economic problem,” a stepping stone to the promised land of “three-hour shifts or a fifteen-hour week.” He distinguished between two types of human needs: “absolute” needs that are independent of others, and “relative” needs that are propelled by status and peer comparison. Keynes argued that as material abundance grows, we would fully meet our absolute needs and turn our attention to resolving the relative needs he associates with being “non-economic.” Hence, we would be able to engage fully in this pursuit while maintaining comfort by working far fewer hours.
By this line of reasoning, the struggles of newly graduated, prospective white-collar workers would be an example of “technological unemployment,” a necessary evil on the journey to a society of abundance. In the transformative near-century since the essay’s publication, the economy has indeed experienced tremendous productivity growth, and people generally work fewer hours than ever, yet nowhere close to the extent Keynes predicted. The decline in working hours has nearly plateaued over the past few decades, even amidst significant changes in technology.
Critiques of Keynes’s essay usually point out that, contrary to his assumption about absolute needs, our actual desires for consumption are insatiable, which keeps long working hours necessary. Others have pointed to work becoming more enjoyable over time, and further arguments invoke firm incentives and the rising real costs of housing, education, healthcare, and more.
Though Keynes was wrong about the fifteen-hour work week, so far history has proven him to be correct in anticipating that technological unemployment would merely be a “transient period of maladjustment.” With the advent of artificial general intelligence on the horizon, however, will the dream of abundance, “economic possibilities,” and unbounded improvements in the standard of living continue to live?
In their 2024 working paper, “Scenarios for the Transition to AGI,” Anton Korinek and Donghyun Suh attempt to answer this question by modeling human work as “atomistic tasks” across a complexity spectrum. They assume humans can, in principle, perform any task, while automation applies to tasks below a moving complexity “frontier.” If human task complexity has an upper bound and automation continues rising, wages eventually fall as the automation frontier approaches that bound. At this threshold, human labor is no longer scarce and instead becomes interchangeable with capital inputs such as equipment and compute (the computational resources available for information processing) for producing additional output.
Yet if there is no upper bound on human task complexity, Korinek and Suh’s model shows that the demand for human labor can remain constant while wages continue to rise. They consider the existence of “nostalgic jobs,” including caregivers, priests, and judges, that we may choose to reserve for humans even if machines could perform them better, and they claim that tasks of ever-higher complexity will continue to emerge.
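The threshold logic of the two scenarios can be sketched in a toy calculation. This is a deliberate simplification for illustration, not Korinek and Suh’s actual model: treat tasks as spread uniformly over a complexity interval and let an automation frontier sweep upward from below.

```python
# Toy sketch of the automation-frontier idea (an illustrative simplification,
# not Korinek and Suh's actual model). Tasks are spread uniformly over the
# complexity interval [0, ceiling]; everything below the rising automation
# "frontier" is done by machines, and humans work only on tasks above it.

def human_task_share(frontier: float, ceiling: float) -> float:
    """Share of tasks still requiring human labor."""
    remaining = max(ceiling - frontier, 0.0)
    return remaining / ceiling

# Scenario 1: human task complexity has an upper bound. As the frontier
# approaches that bound, the human share shrinks to zero and labor stops
# being scarce relative to capital.
bounded = [human_task_share(f, 10.0) for f in (2.0, 5.0, 8.0, 10.0)]
print(bounded)  # [0.8, 0.5, 0.2, 0.0]

# Scenario 2: no upper bound. If new, more complex tasks emerge in
# proportion to automation (the ceiling grows with the frontier), the
# human share can hold steady even as the frontier rises.
unbounded = [human_task_share(f, c)
             for f, c in ((2.0, 10.0), (4.0, 20.0), (8.0, 40.0))]
print(unbounded)  # [0.8, 0.8, 0.8]
```

The task share here stands in only for the scarcity of human labor; in the actual paper, wages emerge from general-equilibrium forces rather than from a single ratio.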

Graphic By: Sharvani Andurlekar
In this framework, today’s entry-level workers and new graduates are first in line to be displaced. That leaves the next generation in a precarious position: their careers are contingent on incumbent human labor shifting into newly emerging, higher-complexity tasks, opening entry points below them. Everything depends on an unpredictable race between automation and the emergence of entirely new forms of work. If this race goes the wrong way, the struggles of today’s graduates may not be the “transient period of maladjustment” Keynes described, but rather the harbinger of a “new social class” unable to find its first footing on the career ladder.
Featured Image by Nahrizul Kadri on Unsplash
