AI and Labor Law: Algorithmic Management and Dignity


📚 Based on

Good Economics for Hard Times
PublicAffairs (US), Juggernaut Books (India), Allen Lane (UK)

👤 About the Authors

Abhijit V. Banerjee

Massachusetts Institute of Technology (MIT)

Abhijit V. Banerjee is an Indian-American economist and Ford Foundation International Professor of Economics at MIT. He is a co-founder and Director of the Abdul Latif Jameel Poverty Action Lab (J-PAL). Banerjee shared the 2019 Nobel Prize in Economic Sciences with Esther Duflo and Michael Kremer for their experimental approach to alleviating global poverty.

Esther Duflo

Massachusetts Institute of Technology (MIT) until July 2026, then University of Zurich

Esther Duflo is a French-American economist known for her work on alleviating global poverty. She is the Abdul Latif Jameel Professor at MIT and co-founded the Abdul Latif Jameel Poverty Action Lab (J-PAL). Duflo shared the 2019 Nobel Prize in Economics with Abhijit Banerjee and Michael Kremer.

Technology: A Social Construct Rather Than a Force of Nature

Treating the development of artificial intelligence as an inevitable natural phenomenon is a fundamental cognitive error. Technology is not a "coming wave" but the result of specific investment and political decisions. Economists Abhijit Banerjee and Esther Duflo point out that automation stems from fiscal and organizational incentives, not metaphysical necessity. In this view, labor law ceases to be a mere set of regulations and becomes a powerful tool for social steering. Through the architecture of costs and risks, the state decides whether to promote human labor or replace it with machines, even if the latter does not increase real productivity.

Algorithmic Management: Control Instead of Automation

Modern algorithmic management differs from classic automation in that it does not replace the worker's hands but acts as an invisible architect of work. These systems assign tasks, dictate pace, and impose sanctions based on predictions, redefining employee subordination. The "person-to-person" relationship is replaced by a "person-to-system" model, in which the lack of negotiating power and unclear evaluation criteria produce real dependency, regardless of the contract type. The alleged objectivity of algorithms is a mystification: these models merely formalize organizational goals, often hiding oppressive criteria behind a mask of neutral statistics.

AI Regulations: From the AI Act to Operational Explainability

The EU's AI Act classifies artificial intelligence systems in the field of employment as high-risk solutions. This imposes a dense web of requirements on employers: from data quality management to mandatory human oversight. Simultaneously, the Platform Work Directive establishes standards of fairness, requiring the algorithm to be a tool rather than a sovereign. A key demand is operational explainability—the worker's right to receive justifications for decisions in terms that can be realistically challenged. In global business, AI auditability and ethics will become a new compliance standard, building a competitive advantage for companies that prioritize transparency.
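The demand for operational explainability can be made concrete with a minimal sketch. All names here (`Decision`, `assess_shift_eligibility`, the thresholds) are hypothetical illustrations, not any real HR system or AI Act template: the point is only that an automated decision should carry the specific criteria it rests on, so the worker can contest each one.

```python
# Hypothetical sketch: an automated decision that ships with its own
# justification, instead of a bare verdict from a black box.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    criteria: dict  # human-readable criterion -> (observed value, threshold)

def assess_shift_eligibility(on_time_rate: float, cancel_rate: float) -> Decision:
    # Record every criterion the decision depends on, in contestable form.
    checks = {
        "on_time_rate >= 0.90": (on_time_rate, 0.90),
        "cancel_rate <= 0.05": (cancel_rate, 0.05),
    }
    ok = on_time_rate >= 0.90 and cancel_rate <= 0.05
    # A failed check routes to human review rather than a silent sanction.
    return Decision("eligible" if ok else "needs human review", checks)

d = assess_shift_eligibility(on_time_rate=0.92, cancel_rate=0.08)
print(d.outcome)                            # needs human review
print(d.criteria["cancel_rate <= 0.05"])    # shows observed value vs threshold
```

The design choice matters more than the code: because the criteria travel with the outcome, an appeal can target a concrete threshold ("my cancel rate was mislogged") rather than an opaque score.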

Pathologies of the Digital Regime and the Trap of Excessive Automation

Management based on statistical models generates four main pathologies: status (hiding the employment relationship), transparency (the black box), discrimination (poisoned data), and dignity (eroding recognition and agency). Banerjee and Duflo warn against excessive automation, which merely shifts costs onto workers and the state budget. In a "sticky economy" where professional mobility is low, the algorithmic regime breeds three forms of humiliation: epistemic (ignorance of criteria), procedural (lack of an appeals process), and ontological (reducing humans to data). Even Universal Basic Income (UBI) does not solve this problem, as it offers cash instead of social recognition and agency.

Summary: Labor Law as a Safeguard of Social Order

In the era of algorithms, where efficiency blurs the boundaries of humanity, the key question is whether technology liberates or enslaves us. Labor law must evolve to protect human recognition and prevent market polarization, where elites control the models while the rest become mere objects of scoring. The future depends on enforcing accountability on technology providers and restoring agency to workers. Will we become slaves to algorithmic logic, or will we use it to build a just world? The answer depends on our ability to regain control over the tools we have created.


📖 Glossary

Algorithmic management
A system that uses algorithms and data to automatically assign tasks, evaluate performance, and discipline workers without direct human involvement.
AI Act
An EU regulation governing the artificial intelligence market, classifying HR systems as high-risk solutions requiring strict oversight.
Black box
A situation in which an AI model's decision-making mechanisms are opaque and impossible to understand for the user or the person affected by the decision.
Operational explainability
The ability to justify a specific decision made by an AI system in understandable terms, allowing it to be substantively challenged on appeal.
Epistemic humiliation
A state in which workers do not know the criteria by which they are evaluated and have no certainty about the stability of the rules governing their work in an algorithmic system.
Human oversight
A legal requirement ensuring that AI systems are controlled by humans who have a real ability to intervene and change automated decisions.
Platform work
An employment model mediated by digital applications, in which the algorithm performs a managerial function, often masking a real employment relationship.
High-risk systems
A category of AI systems under EU law that, due to their significant impact on people's lives, are subject to the strictest documentation and quality requirements.

Frequently Asked Questions

How does the AI Act affect employee management?
The AI Act classifies systems used in HR as high-risk, which imposes obligations on companies in terms of data quality management, transparency and ensuring constant human oversight of algorithmic decisions.
What is status pathology in platform work?
This is the use of technology to pretend that there is no employment relationship between the parties because 'there is no boss' – instead, there is a supposedly neutral platform that in fact exercises full management control.
Why can algorithms be discriminatory?
Algorithms learn from historical data, which often contains human biases. Without active correction, these systems mathematically reproduce and perpetuate past inequalities in the recruitment and assessment processes.
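The mechanism described above can be shown in a deliberately tiny, synthetic sketch. The data and the `learn_selection_rates` function are invented for illustration and stand in for any model fitted to historical outcomes: a system that faithfully learns past selection rates per group reproduces the past disparity with mathematical precision.

```python
# Hypothetical sketch: a "model" trained on biased historical hiring
# outcomes reproduces the bias unless it is actively corrected.
from collections import defaultdict

# Synthetic history: group A was hired 60% of the time, group B only 20%.
history = ([("A", True)] * 60 + [("A", False)] * 40
           + [("B", True)] * 20 + [("B", False)] * 80)

def learn_selection_rates(records):
    """Fit the simplest possible model: per-group historical hire rates."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

rates = learn_selection_rates(history)
# The learned recommendation rates simply mirror the historical skew,
# perpetuating the original inequality into future decisions.
print(rates)  # {'A': 0.6, 'B': 0.2}
```

Real systems are far more complex, but the failure mode is the same: optimizing fidelity to past decisions encodes past discrimination, which is why active correction and auditing are required rather than assumed.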
What does 'faceless subordination' mean?
This is a new form of working relationship in which orders do not come from a person but are subtly enforced by the architecture of the IT system, making it difficult for the employee to identify the source of authority and question it.
What are the main forms of humiliation in the algorithmic workplace?
The article distinguishes three forms: epistemic (lack of knowledge about the evaluation criteria), procedural (lack of an effective appeals process) and ontological (reduction of a human to a set of data and signals in the system).

🧠 Thematic Groups

Tags: algorithmic management, AI Act, high-risk systems, human oversight, algorithmic transparency, faceless subordination, black box, operational explainability, excessive automation, Platform Work Directive, algorithmic discrimination, epistemic humiliation, quasi-administrative decisions, incentive architecture, subjective rights