AI is often presented as a tool for efficiency and innovation, but your work highlights how power and decision-making are concentrated in a small number of actors. From your perspective, what are the most important risks this concentration poses for workers and employees across different sectors?
The concentration of capital, data, energy, land, and power in a few AI companies threatens democracy. That’s why I call them ‘empires.’ The risk to workers and employees is a loss of agency and control over their livelihoods, as well as the ability to influence the future.
We are already seeing this. AI is creating cracks in the economy: layoffs have increased while job growth has slowed. Those still working face more precarious positions, with bosses demanding higher productivity through AI, even when the tools are not actually helpful, or threatening layoffs.
When workers use these mandated tools, they also supply the data that AI companies can use to train models that might one day replace them. Employers stand to lose too: with enough data, AI companies could consume the services and products of the very industries they serve.
And what can they do about it?
We need to engage in collective action to push back against the exploitation and extraction of the ‘empires,’ and their facilitation of democratic backsliding. For workers and employees, that can mean organising to demand better labour rights and protections against AI use and automation, as we saw with the Hollywood writers’ strikes.
AI industry workers have also used their collective power to protest employer actions. For example, over 1,000 Amazon employees signed an open letter criticising leadership for an “all-costs-justified, warp-speed approach to AI development” that threatens democracy, jobs, and the earth. We need more of this.
Public debates about AI frequently focus on future job losses or spectacular breakthroughs, while the lived realities of workers receive less attention. What aspects of AI’s impact on the world of work do you think are currently underestimated or misunderstood?
Recently, the US jobs report showed the economy restructuring due to the AI industry’s impact. Job growth has slowed across nearly all sectors—CEOs directly credit AI for the slowdown. One exception is data-centre construction. In my reporting, I’ve spoken to many people, especially young people, who are struggling to find work and are bombarded by job ads for data annotation. These two data points reveal an overlooked story: As the AI industry consumes the traditional economy, it profits from new waves of precarious workers, most of whom take gig and contract jobs that support the industry itself. This shows the industry’s imperial logic.
Do you think AI will destroy jobs or, as some believe, will move people to other jobs?
Past waves of automation show that some jobs disappear while others emerge, typically at different rungs of the career ladder. In factories, for example, robots eliminated assembly-line jobs but created more managerial positions as output increased, along with supervisor roles for handling the dangerous edge cases where robots fail. In other words, automation breaks the career ladder.
We are already seeing the same thing play out with AI. Entry-level 9-to-5 jobs are disappearing, making it harder for young people to enter well-paying industries. Those who entered earlier retain access to higher-level jobs, while many newcomers are relegated to more precarious contract-based work.
Employees and trade unions often feel excluded from decisions about the design and deployment of AI systems in their workplaces. What concrete mechanisms—legal, institutional, or organisational—could help ensure that workers have a meaningful voice in how AI is introduced and governed?
Workers must organise and push management to listen. The entertainment industry offers strong examples, such as the Hollywood writers’ strike and the recent Creators Coalition on AI, an industry-wide group of actors and directors that is establishing new norms and rules for AI development, adoption, and impact.
Crucially, the Coalition takes a broad view. It focuses not only on issues like consent over creative work, but also on protecting precarious workers and planning transitions for those affected by automation. Equally important, it aims to build solidarity across industries. At launch, the Coalition invited everyone who shares its values to help realign AI with respect for humanity.
These examples show workers can build collective influence over AI-related decisions by organising for a stronger voice at the table.
Looking ahead, what would a more democratic and worker-centred approach to AI look like in practice?
Exactly as you phrased it in your question: governance structures that include workers in decisions about the design and deployment of AI systems, at every level.
Are there examples, principles, or policy directions that give you hope that AI could be developed and used in ways that genuinely benefit employees rather than disempower them?
One thing I feel strongly about: We need to shift AI development away from the pursuit of so-called artificial general intelligence. It’s ill-defined, resource-intensive, and leads to an extractive, exploitative supply chain. This quest frames the goal as replacing humans, which can only disempower workers—and everyone else.
AI doesn’t have to be that way. Many forms of AI—especially smaller, specialised systems—can assist rather than replace humans. If we want technology to empower employees, we must start there.