Progress or Plunder?
Opining on the false promises made by autocracy’s newest ally, AI
OpenAI's GPT-4 was trained on content scraped without consent from millions of creators, effectively committing the largest act of intellectual property theft in human history. When artists objected, they were told this was necessary for progress. When they sued, they were overwhelmed by legal motions funded by Microsoft's seemingly bottomless war chest. The same company that publishes papers on "AI alignment" simultaneously argues in court that copyright law shouldn't apply to its training methods.

The AI industry has perfected the art of presenting the displacement of the working class as inevitable technological progress when it is, in fact, a deliberate economic choice. When Goldman Sachs estimates that AI could expose the equivalent of 300 million full-time jobs worldwide to automation, that isn't prophecy; it's a business plan. These jobs aren't disappearing through the natural course of technological evolution; they're being eliminated because algorithmic workers are cheaper than human ones and cannot organise unions.
AI systems make life-altering decisions about credit, employment, housing, and criminal justice, yet the companies deploying them face no ethical or legal liability when these systems fail. When chatbots feed patients false medical information, the businesses running them shelter behind Section 230 safeguards and terms-of-service agreements. When facial recognition systems misidentify innocent people and false arrests follow, the corporations that built them point to terms of service that shield them from accountability. When autonomous cars hit people, manufacturers explain that the vehicles are "still learning", as if human lives were acceptable tuition for corporate R&D. When AI hiring tools consistently discriminate against protected classes, companies insist they are merely surfacing patterns in past data rather than automating historical discrimination. This is a carefully orchestrated legal architecture in which corporations reap every benefit while externalising every risk onto the populations they claim to serve. Meanwhile, an individual who uses AI to generate deepfakes or commit fraud faces criminal prosecution. The asymmetry is grotesque and deliberate: personal liability for citizens, corporate immunity for the powerful.
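To see how "exposing patterns in past data" and "automating historical discrimination" collapse into the same thing, consider a minimal, hypothetical sketch using synthetic data and scikit-learn. Every variable, threshold, and number here is invented for illustration, not drawn from any real hiring system:

```python
# Hypothetical sketch: a model trained on historically biased hiring decisions
# reproduces that bias, even when the protected attribute is excluded from its inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                    # protected class (0 or 1), never shown to the model
skill = rng.normal(0, 1, n)                      # equally distributed across both groups
postcode = 0.8 * group + rng.normal(0, 0.5, n)   # proxy feature correlated with group

# Historical decisions: the same skill bar, but group 1 was penalised.
hired = (skill - 0.7 * group + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([skill, postcode])           # protected attribute deliberately excluded
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model hire rate {pred[group == g].mean():.2f}")
```

Dropping the protected attribute changes nothing of substance: the proxy feature carries the historical penalty straight back into the model's selection rates.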
The entire AI ethics apparatus exists to legitimise an unprecedented concentration of corporate power. Tech companies fund university AI ethics centres, which produce research fixated on individual instances of algorithmic bias while leaving systemic power consolidation unexamined. We debate whether facial recognition is 2% more accurate for one demographic than another while ignoring the fundamental question: should corporations and governments have the right to identify and track citizens without consent in the first place?
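The narrow question the industry prefers is trivially easy to compute, which is part of its appeal. Here is a minimal sketch, with illustrative placeholder data rather than any real audit, of how a per-demographic accuracy gap of that kind is measured:

```python
# Hypothetical sketch: per-group accuracy and the largest gap between groups.
from collections import defaultdict

def accuracy_by_group(labels, predictions, groups):
    """Return accuracy per demographic group and the largest gap between groups."""
    correct, total = defaultdict(int), defaultdict(int)
    for y, p, g in zip(labels, predictions, groups):
        total[g] += 1
        correct[g] += int(y == p)
    acc = {g: correct[g] / total[g] for g in total}
    return acc, max(acc.values()) - min(acc.values())

# Toy, made-up data: the resulting gap is the kind of number these debates
# fixate on, and it says nothing about whether the system should exist at all.
acc, gap = accuracy_by_group(
    labels=[1, 1, 0, 1, 0, 1, 1, 0],
    predictions=[1, 1, 0, 1, 0, 1, 1, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(acc, gap)
```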
The discourse around AI ethics has been captured by the very entities it should regulate, becoming the most effective exercise in corporate misdirection since tobacco companies funded research into "safe cigarettes". Tech giants convene ethics boards, publish principles, and hire philosophers to debate hypothetical trolley problems while their actual products cause measurable, documented harm to real human beings today.
The fundamental question isn't whether AI can be ethical; it's whether concentrating this much power in unaccountable corporate hands can ever serve the public interest. Every AI ethics framework proposed by tech companies assumes their continued dominance is non-negotiable and that the role of ethics is to make that dominance palatable, not to question it.
The data clearly demonstrate that corporate AI deployment follows the same pattern as every previous wave of automation: productivity gains accrue to shareholders while workers bear the adjustment costs. A 2024 MIT study found that AI adoption correlates with increased corporate profits and a decreased labour share of income. The technology works exactly as designed, not to augment human capability but to replace human workers with cheaper algorithmic alternatives.
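For readers unfamiliar with the metric, labour share is simply employee compensation divided by value added, so productivity gains that flow to shareholders rather than wages appear as a falling ratio. A tiny illustration with hypothetical figures:

```python
# Illustrative only: made-up numbers showing how a rising output with flat wages
# produces a declining labour share.
def labour_share(compensation, value_added):
    return compensation / value_added

before = labour_share(compensation=62.0, value_added=100.0)  # pre-automation
after = labour_share(compensation=64.0, value_added=115.0)   # output up, wages nearly flat
print(f"labour share: {before:.1%} -> {after:.1%}")
```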
What we're witnessing is the inevitable culmination of a decades-long project to eliminate human agency from economic decision-making. Every algorithm that decides creditworthiness, every neural network that determines insurance premiums, every automated system that evaluates job applications represents a transfer of power from democratic institutions to corporate boardrooms.
The solution isn't better ethics guidelines or more diverse training data. Those are performative gestures that legitimise the underlying power structure. The solution is democratic control over technologies that affect entire populations. If AI systems are to make decisions about healthcare, employment, and justice, they should be publicly owned, transparently operated, and democratically accountable.
Instead, we're hurtling toward a future where a handful of companies control the infrastructure of decision-making itself, where algorithms determine opportunity and algorithms enforce compliance, where human judgment is systematically devalued in favour of statistical correlations that no one can explain or challenge.
The tech executives will tell you this is progress. They'll show you efficiency gains and cost reductions. They'll promise that everyone will benefit eventually, once we work out the kinks.
They're lying.
AI ethics, as currently practised, is corporate theatre designed to forestall regulation while companies consolidate power. Until we recognise this and respond accordingly, we're not having a conversation about ethics. We're watching a masterclass in manufacturing consent for our own obsolescence.
The algorithm will not save you. It was never designed to.
References:
- https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
- https://www.reuters.com/technology/amazon-used-ai-tools-track-organizers-union-drive-documents-show-2024-01-11/
- https://www.nber.org/papers/w31161
- https://www.theguardian.com/technology/2024/jan/18/meta-ai-responsible-ai-team-restructuring