Critical Theory and Artificial Intelligence: A Reassessment of Innerarity

A reassessment of Una teoría crítica de la inteligencia artificial by Daniel Innerarity

Abstract

This paper offers a systematic and methodologically rigorous critique of Daniel Innerarity's Una teoría crítica de la inteligencia artificial. It argues that, despite its normative ambition and rhetorical sophistication, the work fails to meet the epistemological requirements for informing public governance.

The critique focuses on three structural weaknesses: the absence of falsifiable claims, the substitution of conceptual abstraction for empirical evaluation, and the reification of "artificial intelligence" as a unified and quasi-agentive phenomenon.

The central conclusion is that the primary weakness of Innerarity's theory is not ideological but methodological: a critique that refuses empirical exposure cannot function as a basis for governance and risks replacing accountable evaluation with permanent moral suspicion.

1. Introduction: Defining the Object of Critique

This paper is not a general critique of artificial intelligence, nor a contribution to AI ethics, nor a policy proposal regarding digital technologies. It is a focused critical examination of a specific theoretical work: Una teoría crítica de la inteligencia artificial by Daniel Innerarity.

The relevance of this critique lies in the book's increasing circulation beyond philosophical debate and into institutional and policy-oriented contexts. When a theoretical framework is invoked to orient public decision-making, its methodological assumptions acquire practical significance. A theory that cannot be evaluated, corrected, or revised in light of evidence cannot responsibly inform governance.

The central claim advanced here is that Innerarity's approach substitutes conceptual density for empirical accountability, producing a form of critique that is rhetorically persuasive but operationally inert.

2. Critique Without Falsifiability

A defining feature of Innerarity's analysis is the formulation of broad claims about artificial intelligence and democracy that are resistant to empirical testing. Assertions concerning the transformation of political rationality, decision-making, or governance under AI are presented without operational definitions, measurable indicators, or counterfactual scenarios.

From a methodological perspective, such claims are non-falsifiable. There is no conceivable empirical observation that could refute them. As argued by Karl Popper, a claim that cannot, even in principle, be shown to be false cannot function as a scientific or policy-relevant hypothesis.

This does not render the claims philosophically meaningless, but it does render them unsuitable for guiding institutional action. Public governance requires frameworks that expose themselves to the risk of error.

3. Conceptual Reification and the Myth of "Artificial Intelligence"

Innerarity's theory consistently treats artificial intelligence as a coherent and unified phenomenon with systemic effects across social and political domains. This reification collapses heterogeneous systems — statistical classifiers, recommender systems, language models, decision-support tools — into a single analytical object.

The consequence is a displacement of responsibility. When "AI" is treated as an agent, attention is diverted from the actual sources of power: institutional design, procurement decisions, regulatory choices, and governance structures. Critique becomes diffuse, and accountability evaporates.

In policy contexts, abstraction at this level does not clarify problems; it obscures them.

4. Philosophical Foundations: Foucauldian Influence and Its Limits

Innerarity's framework draws heavily on the intellectual legacy of Michel Foucault, particularly in its treatment of power as diffuse, relational, and embedded in systems of knowledge. This tradition has proven valuable for historical and genealogical analysis, revealing how institutions and norms emerge over time.

However, the very features that make Foucauldian analysis insightful in retrospective critique become liabilities when applied to governance. Power conceived as omnipresent and non-localizable resists attribution. If control is everywhere, no actor is decisively responsible. If domination is structural and inescapable, no intervention can be evaluated as corrective.

In Innerarity's application of this framework to artificial intelligence, critique explains everything after the fact while guiding nothing before the decision.

5. What a Falsifiable Policy Claim Would Look Like

The methodological gap becomes clear when non-falsifiable critique is contrasted with claims that would meet institutional standards. A statement such as "artificial intelligence intensifies regimes of control and surveillance" cannot be empirically disproven.

By contrast, claims such as the following are falsifiable and usable in policy:

  • The introduction of AI-assisted administrative processing reduces average response times by a specified percentage without increasing error rates beyond a defined threshold.
  • Automated language support systems increase effective access to public services in minority languages relative to human-only baselines.
  • The absence of human override mechanisms in automated decision systems correlates with a measurable increase in exclusion errors.

These claims specify variables, allow measurement, and enable correction. Innerarity's framework does not generate claims of this kind.
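To make the contrast concrete, here is a minimal sketch of how the first claim in the list could be checked against data. Everything in it is hypothetical: the thresholds, the figures, and the function are illustrations of the claim's structure, not a proposed measurement protocol.

```python
# A minimal sketch of testing the first claim above.
# All data, thresholds, and names are hypothetical placeholders.

from statistics import mean

def claim_holds(
    baseline_times: list[float],       # response times (days) before AI assistance
    assisted_times: list[float],       # response times (days) after AI assistance
    baseline_error_rate: float,        # error rate before AI assistance (0-1)
    assisted_error_rate: float,        # error rate after AI assistance (0-1)
    min_reduction: float = 0.20,       # claimed reduction: at least 20%
    max_error_increase: float = 0.01,  # tolerated error-rate increase: 1 point
) -> bool:
    """Return True if the data corroborate the claim, False if they
    refute it. Either outcome is possible: that is what makes the
    claim falsifiable."""
    reduction = 1 - mean(assisted_times) / mean(baseline_times)
    error_increase = assisted_error_rate - baseline_error_rate
    return reduction >= min_reduction and error_increase <= max_error_increase

# Invented numbers: a 25% reduction with negligible error growth
# corroborates the claim; a 5% reduction would refute it.
print(claim_holds([12.0, 14.0, 10.0], [9.0, 10.5, 7.5], 0.030, 0.032))
```

The point is structural rather than computational: the check can come out False, which is precisely what the claims discussed in Section 2 cannot do.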

6. Institutional Consequences of Non-Empirical Critique

When non-falsifiable critique is adopted as an implicit governance framework, predictable institutional effects follow:

  • Decision paralysis framed as ethical caution
  • Absence of benchmarks for success or failure
  • Resistance to learning and revision
  • Diffusion of responsibility under abstract categories

These outcomes are not accidental; they are the logical consequences of a critique designed to resist verification.

7. Applied Illustration: Multilingual Public Administration

Multilingual public administration provides a revealing applied context. Language policy necessarily involves trade-offs between coverage, cost, speed, and accuracy. These trade-offs are measurable.

Abstract claims that artificial intelligence threatens linguistic diversity offer no guidance unless translated into indicators: access rates, error distributions, delays, and comparative baselines. Without such measures, critique becomes symbolic rather than protective.
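As an illustration of what such translation into indicators might look like, the sketch below computes two of them: an effective access rate per language and a gap relative to a majority-language baseline. All figures and language shares are invented for the example; real indicators would also require error distributions and delay data, as noted above.

```python
# A hedged sketch of turning an abstract concern into indicators.
# Figures and languages are invented for illustration only.

requests = {            # service requests received, per language
    "Spanish": 10_000,
    "Basque": 1_200,
    "Galician": 900,
}
completed = {           # requests successfully resolved, per language
    "Spanish": 9_400,
    "Basque": 960,
    "Galician": 750,
}

# Indicator 1: effective access rate per language.
access_rate = {lang: completed[lang] / requests[lang] for lang in requests}

# Indicator 2: access gap relative to the majority-language baseline
# (positive values mean worse access than the baseline).
baseline = access_rate["Spanish"]
access_gap = {lang: baseline - rate for lang, rate in access_rate.items()}

for lang in requests:
    print(f"{lang}: access {access_rate[lang]:.1%}, gap {access_gap[lang]:+.1%}")
```

A gap that widens after an automated system is deployed would substantiate the threat to linguistic diversity; a gap that narrows would count against it. Either way, the claim becomes answerable.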

The issue is not whether risks exist, but whether they can be identified, measured, and mitigated. A framework that cannot answer these questions cannot support effective language policy.

8. Conclusion

Una teoría crítica de la inteligencia artificial does not fail because it is normative, nor because it raises ethical concerns, nor because it adopts a critical stance toward technology. It fails because it refuses empirical exposure. In doing so, it produces a form of critique that is institutionally attractive precisely because it demands no verification and entails no responsibility.

For public governance, critique must accept the risk of being wrong. A theory that cannot be falsified cannot be corrected. A critique that cannot be corrected cannot govern.