INTERVIEWS : The Hidden Cost of AI

Agora #96
20 - 23

Technology is moving forward quickly, but our ideas about work are struggling to keep pace. The future of AI is not set in stone.

Artificial intelligence is storming into our lives, reshaping economies, workplaces, and democratic norms. No longer just a technical breakthrough, AI now raises pressing questions: who gains, who loses, and who sets the rules as it spreads across offices, factories, and government.

Amid discussions of productivity and innovation, another story unfolds. Workers face new job insecurities, employers manage tools they barely understand, and governments struggle to regulate technology that outpaces their rules. Employees, unions, and public sector staff—often overlooked—offer insights more valuable than bold claims about automation.

To explore these issues, we spoke with three experts studying AI’s impact on power, work, and governance. Journalist Karen Hao highlights how large AI companies reshape job markets and worker control, but notes growing collective organising. Cybersecurity expert Dr Valentin Weber warns of expanding, quiet AI workplace surveillance that threatens privacy and trust. Philosopher Simon Goldstein predicts faster job loss than in past tech cycles and urges unions and governments to prepare now.

Together, their insights show that technology is moving forward quickly, but our ideas about work are struggling to keep pace. These interviews remind us that the future of AI is not set in stone. It will depend on the decisions we make now and whether workers have a say in what happens next.

Karen Hao is an award-winning journalist, bestselling author, and MIT-trained engineer at the forefront of global AI coverage. A former Wall Street Journal foreign correspondent and senior AI editor at MIT Technology Review, she now writes for The Atlantic. She co-created the Pulitzer Center’s AI Spotlight Series and is the author of the bestseller Empire of AI, a bold, agenda-setting examination of power, politics, and the future of artificial intelligence.

AI is often presented as a tool for efficiency and innovation, but your work highlights how power and decision-making are concentrated in a small number of actors. From your perspective, what are the most important risks this concentration poses for workers and employees across different sectors?

The concentration of capital, data, energy, land, and power in a few AI companies threatens democracy. That’s why I call them ‘empires.’ The risk to workers and employees is a loss of agency and control over their livelihoods, as well as the ability to influence the future.

We are already seeing this. AI is creating cracks in the economy: layoffs have increased while job growth has slowed. Those still working face more precarious positions, with bosses demanding higher productivity from AI—even when those tools aren’t helpful—or threatening layoffs.

By using required tools, workers provide data that AI companies can use to train models that might replace them. Employers also stand to lose. With enough data, AI companies could consume the services and products of other industries.

And what can they do about it?

We need to engage in collective action to push back against the exploitation and extraction of the ‘empires,’ and their facilitation of democratic backsliding. For workers and employees, that can mean organising to demand better labour rights and protections against AI use and automation, as we saw with the Hollywood writers’ strikes.

AI industry workers have also used their collective power to protest employer actions. For example, over 1,000 Amazon employees signed an open letter criticising leadership for an “all-costs-justified, warp-speed approach to AI development” that threatens democracy, jobs, and the earth. We need more of this.

Public debates about AI frequently focus on future job losses or spectacular breakthroughs, while the lived realities of workers receive less attention. What aspects of AI’s impact on the world of work do you think are currently underestimated or misunderstood?

Recently, the US jobs report showed the economy restructuring due to the AI industry’s impact. Job growth has slowed across nearly all sectors—CEOs directly credit AI for the slowdown. One exception is data-centre construction. In my reporting, I’ve spoken to many people, especially young people, who are struggling to find work and are bombarded by job ads for data annotation. These two data points reveal an overlooked story: As the AI industry consumes the traditional economy, it profits from new waves of precarious workers, most of whom take gig and contract jobs that support the industry itself. This shows the industry’s imperial logic.

Do you think AI will destroy jobs or, as some believe, will move people to other jobs?

Past waves of automation show that some jobs disappear while others emerge, typically above or below the rungs that were lost. In factories, for example, robots eliminated assembly-line jobs but, as output increased, created more managerial positions and more supervisor roles for handling dangerous edge cases when robots fail. In other words, automation breaks the career ladder.

We are already seeing the same thing play out with AI. Entry-level 9-to-5 jobs are disappearing, making it harder for youth to enter well-paying industries. Those who entered before have access to higher-level jobs, while many others are relegated to contract-based work with increased precarity.

Employees and trade unions often feel excluded from decisions about the design and deployment of AI systems in their workplaces. What concrete mechanisms—legal, institutional, or organisational—could help ensure that workers have a meaningful voice in how AI is introduced and governed?

Workers must organise and push management to listen. The entertainment industry offers strong examples, such as the Hollywood writers’ strike and the recent Creators Coalition on AI, an industry-wide group of actors and directors that is establishing new norms and rules for AI development, adoption, and impact.

Crucially, the Coalition takes a broad view. It focuses not only on issues like consent over creative work, but also on protecting precarious workers and planning transitions for those affected by automation. Equally important, it aims to build solidarity across industries. At launch, the Coalition invited everyone who shares its values to help realign AI with respect for humanity.

These examples show workers can build collective influence over AI-related decisions by organising for a stronger voice at the table.

Looking ahead, what would a more democratic and worker-centred approach to AI look like in practice?

Exactly as you phrased it in your question above – having governance structures that include workers in decisions about the design and deployment of AI systems at every level.

Are there examples, principles, or policy directions that give you hope that AI could be developed and used in ways that genuinely benefit employees rather than disempower them?

One thing I feel strongly about: We need to shift AI development away from the pursuit of so-called artificial general intelligence. It’s ill-defined, resource-intensive, and leads to an extractive, exploitative supply chain. This quest frames the goal as replacing humans, which can only disempower workers—and everyone else.

AI doesn’t have to be that way. Many forms of AI—especially smaller, specialised systems—can assist rather than replace humans. If we want technology to empower employees, we must start there.

AI is increasingly being used for workplace surveillance. From a governance and policy perspective, how can we ensure that AI systems used for monitoring do not infringe upon workers’ privacy and freedom?

To ensure that AI monitoring systems do not infringe on employees’ rights, we must first ensure that their deployment is strictly necessary and proportionate. Whenever possible, employees’ work processes should not be surveilled; the focus should instead be on work output. In many cases, good output can be achieved with zero surveillance. When AI workplace monitoring tools are deployed, workers should be notified. And if those tools are used to reprimand or dismiss workers, the workers should have full insight into the data that led to the decision.

You’ve spoken about AI’s role in surveillance. How do you think AI tools for monitoring workers in public administration could affect their rights, job satisfaction, and overall work culture?

As AI tools for surveillance are rolled out, worker satisfaction will decrease. More surveillance signals that employers do not trust their employees, and where employees sense they are not trusted, their work quality will suffer. Unions should push for a culture of trust at work: AI should empower employees, not monitor them. The major challenge with AI surveillance is that massive amounts of employee data are processed, often without workers’ knowledge (an infringement of privacy and anonymity), and it is unknown what exactly the AI considers “unproductive” behaviour. What is more, AI tools can be used to continuously monitor an employee’s online behaviour outside work, something that until now has been prohibitively expensive for companies.

Should there be global regulations governing AI surveillance in the workplace, or should this be left to individual countries or companies?

Global regulations on AI workplace surveillance are desirable, but in the current geopolitical climate, they are unlikely to materialise. What might be useful, however, is for the most advanced countries to share the best practices they have developed with less advanced countries. In this way, lessons learned can diffuse quickly across the globe. In short, the onus is on countries to pass regulations and to ensure that AI surveillance is introduced only when absolutely necessary, and even then to a minimally invasive extent and ideally only for strictly limited time horizons.

Dr Valentin Weber is a senior associate fellow at DGAP’s Centre for Geopolitics, Geoeconomics, and Technology and a China Foresight Associate at LSE IDEAS. His research focuses on cyber norms, the geopolitics of cyberspace, advanced surveillance technologies, and the intersection of cyber and national security. He has held visiting and fellowship positions at Columbia University and Harvard University’s Berkman Klein Centre, contributed to a White Paper for the US Joint Chiefs of Staff, and published in outlets including Die Zeit, Deutsche Welle, South China Morning Post, and the Associated Press. He holds a PhD in cybersecurity from the University of Oxford and studied at Sciences Po, Johns Hopkins University, and the London School of Economics.

Simon Goldstein is an associate professor in the Department of Philosophy at the University of Hong Kong, where he also serves as graduate studies coordinator. He is a leading specialist in AI safety, epistemology, and the philosophy of language, combining rigorous analytic philosophy with cutting-edge debates in technology. Trained at Yale (BA) and Rutgers (PhD), he explores the interactions between artificial intelligence and human reasoning, ethics, and communication. Goldstein has held research positions at the Center for AI Safety and has published widely on AI risk, ethics, and dynamic semantics. At the University of Hong Kong, he contributes to advanced AI research and to the training and mentoring of graduate students from around the world.

There’s significant concern that AI will replace many jobs. Do you think that entire job categories will disappear entirely due to automation?

AI may affect jobs differently than past technologies did. Many AI labs are focused on developing agents designed to directly replace human workers. The risk, unlike with earlier waves of technology, is that jobs disappear rather than merely transform.

The transitions could happen quickly. If an AI lab suddenly develops an AI agent that is as good as humans at a particular task (say, customer service), that agent could be deployed quickly and widely across the sector.

Why think such AI agents could be coming soon? Two recent measures are particularly relevant. First, OpenAI’s GDPval benchmark measures model performance on economically valuable tasks across 44 occupations. The newest models can perform as well as human industry experts on roughly half the tasks. Second, METR’s time horizons capture the length of software engineering tasks that AI models reliably perform. Notably, the newest models can now complete tasks that take expert human software engineers about 5 hours, at 50% reliability. More importantly, the length of tasks models can complete is doubling roughly every 7 months.
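To make the trend concrete, here is a minimal back-of-the-envelope sketch (a simple exponential extrapolation, not METR’s own methodology): if the task length that models can complete at 50% reliability doubles every 7 months from a 5-hour baseline, the projected horizon after m months is 5 × 2^(m/7) hours.

```python
# Illustrative projection of the task-time-horizon trend described above.
# The 5-hour baseline and 7-month doubling period come from the interview;
# the extrapolation itself is a toy calculation, not a forecast.

def projected_horizon_hours(months: float,
                            baseline_hours: float = 5.0,
                            doubling_months: float = 7.0) -> float:
    """Task horizon (in hours) after `months` of exponential doubling."""
    return baseline_hours * 2 ** (months / doubling_months)

if __name__ == "__main__":
    for m in (0, 7, 14, 21):
        print(f"after {m:2d} months: {projected_horizon_hours(m):.1f} hours")
```

On these assumptions the horizon reaches roughly 20 hours after 14 months; the point is not the exact figures but how quickly an exponential trend compounds.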

Overall, we face significant uncertainty about which jobs will be automated, how many, and when automation will occur.

Trade unions have historically fought for workers’ rights against automation. How do you see trade unions playing a role in the ethical governance of AI systems in the workplace?

In the face of this uncertainty, one important role for trade unions will be to lobby governments. Their aim will be to develop economy-wide solutions addressing potentially widespread automation. Specifically, trade unions must advocate preemptively for policies that protect workers across all industries, since automation risks are not confined to any single industry. If automation hits one industry, it may be too late for unions in that sector alone to mount an effective response.

The two most important government responses may be strengthening unemployment insurance and developing a universal basic income. These kinds of economy-wide solutions could help the workers who will be displaced when new AI agents automate an entire sector of the economy.

Véronique Michel

About the author

Véronique Michel is an elected staff representative for IPSO, the Trade Union of the European Central Bank. She is a committed advocate for diversity and inclusion. Over the past eight years, she has organised numerous on-site and online events in her Institution, creating space for open dialogue and bringing together leading experts on a wide range of topics. Among the numerous distinguished speakers she has invited are Oscar-winning filmmaker Costa-Gavras, Nobel Prize laureates Joseph Stiglitz, Michael Spence, and Daron Acemoglu, as well as Michael Forsythe, Sander van der Linden, Mariana Mazzucato, Thomas Piketty, Laura Georges, Adam Tooze, Margrethe Vestager, and Helena Dalli.