Introduction
This article analyzes the AI scaling doctrine, exposing it as a normative civilizational program. The doctrine promises improved model quality as compute, data, and parameters grow, and in practice drives a concentration of power and the marginalization of alternatives. The resulting race generates immense social and environmental costs and imposes an epistemology of correlation that flattens human experience. Without conscious control, scaling will harden into an imperial project threatening pluralism and democracy.
The scaling doctrine: an engineering metaphysics of power
From an engineering perspective, scaling refers to the predictable decrease in model error as data, parameters, and compute increase. Culturally, it is a belief in emergence: the moment when quantity generates qualitatively new cognitive abilities in machines.
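The engineering claim here has a well-known empirical shape: in the scaling-law literature, test loss falls roughly as a power law in model size, data, and compute. A sketch of that canonical form (the constants N_c, D_c, C_c and the exponents are fitted empirically; none of these symbols or values come from this article):

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},
\qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

where L is test loss, N the parameter count, D the dataset size, and C the training compute. The doctrine's cultural force comes from reading these smooth curves as a guarantee that more of each input must keep paying off.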
OpenAI: a digital empire of a new formation
OpenAI operates as an imperial formation that transcends the boundaries of politics and economics, colonizing resources under the banner of the inevitable arrival of AGI.
OpenAI’s Law: the monopolization of technological development
This principle assumes a doubling of computing power every few months. In this logic, compute is king, and the growth of parameters becomes the sole measure of civilizational meaning.
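The compounding behind this claim is easy to make concrete. The sketch below assumes a doubling period of roughly 3.4 months, the figure from OpenAI's widely cited "AI and Compute" analysis (the article itself says only "every few months"), and shows why such a cadence quickly dwarfs every other input to research:

```python
# Illustrative compound-growth arithmetic for "compute doubles every
# few months". The 3.4-month period is an assumption taken from
# OpenAI's "AI and Compute" analysis, not a figure from this article.
doubling_months = 3.4
years = 5

doublings = years * 12 / doubling_months   # how many doublings fit in the window
growth = 2 ** doublings                    # total multiplicative growth in compute

print(f"{doublings:.1f} doublings in {years} years -> roughly {growth:,.0f}x compute")
```

At that pace, five years of "OpenAI's Law" implies compute growth on the order of hundreds of thousands of times, which is why the doctrine treats every other research input as a rounding error.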
Scaling cannibalizes alternative research programs
The concentration of capital around scaling stifles other paths, such as symbolic AI. An example is the suspension of drug development research to free up chips for conversational models.
Iterative AI deployment: a global social experiment
This is a data flywheel strategy where users act as unwitting testers. Product and experiment become one, shifting responsibility to a logic of fixing errors after the fact.
RLHF: exploitation of workers in the Global South
Behind the sterile interface lies an affective proletariat. Workers in Kenya and the Philippines filter traumatic content for pennies to protect the comfort of users in the Global North.
Training large models: the hidden environmental cost
Scaling consumes millions of gallons of water and vast amounts of energy. This is infrastructural imperialism, exporting ecological costs to regions plagued by drought and poverty.
USA, Europe, and Arab nations: geopolitical visions of AI
The US focuses on the market and messianism, Europe on regulations protecting dignity, and Gulf countries treat scaling as a tool for building new state power.
Data compression: the reproduction of bias in AI models
Models distill the internet's "data swamp." Consequently, they compress symbolic violence and bias, presenting statistical probability as objective truth about the world.
Dismantling the metaphysics of scale: new research paradigms
Scaling must be demoted to the role of a mere tool. Progress must be subordinated to external norms: environmental justice and cognitive pluralism.
Democratizing deployment: bottom-up technological control
Deploying high-risk models requires public oversight, pilot phases, and real experiment stop-thresholds in the event of social harm.
Training data: the digital commons of humanity
The internet is not a free mine. Data should be treated as a commons, requiring licensing systems and fair compensation for creators.
Humor and absurdity: tools for unmasking AI power
Humor exposes the gap between the bombastic narrative of world salvation and the triviality of AI applications, such as generating memes or grocery lists.
The trillion-dollar bet: an elite gamble on the future of the economy
Elites are betting that productivity gains will offset social costs. It is a risky gamble where profits are private, and costs are systematically socialized.
A normative program: pillars of an ethical AI future
Key pillars include: the end of monopolies, recognition of data rights, full environmental cost accounting, and the protection of human capacity for independent judgment.
Summary
The race toward artificial intelligence has reached a point where the boundaries between progress and self-destruction are becoming disturbingly fluid. In the pursuit of algorithmic salvation, are we losing sight of what makes us human—the capacity for reflection, dialogue, and shared responsibility? Perhaps it is time to ask whether the future we are so feverishly programming is truly a future in which we want to live.