AI vs Human Judgment: Built on a Shrinking Foundation
Everything we know is encoded in data. Every law, every formula, every medical procedure, every bridge specification. The accumulated knowledge of human civilization sits in data. It always has.
The constraint was never the data itself. It was who could read it.
Six Centuries of the Same Problem
In 1820, only about one in ten humans could read. The other nine had no direct access to their species’ accumulated knowledge. What they could learn was limited to what someone nearby could tell them, show them, or demonstrate through physical labor.
The printing press changed that in the 1450s. Before Gutenberg, books were copied by hand, primarily by the clergy. Knowledge was reserved for elites. When printing made books affordable, the cost of reproducing knowledge collapsed. Literacy rates across Europe rose dramatically over the following centuries. Ideas reached artisans, merchants, and commoners for the first time.
The Renaissance did not happen because humans suddenly became more intelligent. Its later acceleration, the period that reshaped European science, culture, and institutions, happened when the printing press put knowledge into more hands than ever before.
This pattern repeated. The Scientific Revolution. The Enlightenment. The Industrial Revolution. Each was shaped, in part, by an expansion of who could access and process data.
Harvard professor Jeanne Chall documented the cost of this access. Her 1983 “Stages of Reading Development” identifies six stages spanning from birth to age 18 and beyond. Most children begin decoding at age 6 or 7. Reading to learn new content develops through age 14. Full reading proficiency, the ability to hold multiple viewpoints and synthesize across sources, requires roughly 18 years of development.
That is the human bottleneck. Anyone who wants to access the knowledge humanity has produced must spend over a decade learning to decode it. And even after all that institutional infrastructure (schools, libraries, universities, teacher training, curriculum development), roughly 12% of the world’s adult population still cannot read.
Today, roughly 5 billion people can read. Two centuries ago, that number was fewer than 100 million. The infrastructure investment was enormous. The results compounded for centuries.
But the bottleneck was never eliminated. It was only widened.
AI Has No Bottleneck
AI systems do not need 12 to 18 years. They do not need schools. They do not need the institutional infrastructure that humanity built over six centuries to widen literacy access.
An AI system can process the equivalent of a human lifetime of reading in hours. Not because it understands the way humans do. But because the structural constraint that limited humanity for millennia, the time and infrastructure required to learn to decode data, does not apply.
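The "lifetime of reading in hours" claim is easy to sanity-check with back-of-envelope arithmetic. Every constant in the sketch below is an assumption chosen for illustration (reading speed, daily hours, machine throughput), not a measurement:

```python
# Back-of-envelope comparison: lifetime human reading vs. machine ingestion.
# Every constant below is an assumption for illustration, not a measurement.

HUMAN_WPM = 250        # assumed adult reading speed, words per minute
HOURS_PER_DAY = 4      # assumed daily reading time
YEARS = 60             # assumed reading lifespan

# 250 wpm * 60 min * 4 h/day * 365 days * 60 years ≈ 1.3 billion words
lifetime_words = HUMAN_WPM * 60 * HOURS_PER_DAY * 365 * YEARS

MACHINE_WORDS_PER_SEC = 100_000  # assumed ingestion throughput

machine_hours = lifetime_words / MACHINE_WORDS_PER_SEC / 3600

print(f"Lifetime human reading: {lifetime_words / 1e9:.2f}B words")
print(f"Machine catch-up time:  {machine_hours:.2f} hours")
```

Under these assumptions, sixty years of dedicated human reading amounts to roughly 1.3 billion words, and a machine ingesting 100,000 words per second covers that in under four hours. Change the assumptions by an order of magnitude in either direction and the conclusion survives: hours or days, not decades.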
Sundar Pichai called AI “more profound than fire or electricity.” Andrew Ng compared it to electricity’s transformation of industry. These are not small claims. But the comparisons, while directionally correct, miss the structural point.
Fire and electricity changed what humans could do. The printing press changed who could access what humans already knew. AI changes something different. It removes the access constraint entirely.
The entity that can read all the data does not need to be taught to read. It does not need to be literate. It simply processes.
What I Have Seen Building Data Systems
I have spent 20+ years building data platforms, the infrastructure where institutional knowledge is stored, governed, and made accessible. Across telecommunications, digital health, media, and deep-tech imaging. Fourteen platforms. Regulatory compliance across GDPR, ISO 27001, ISO 13485.
In every one of those systems, the hardest problem was never storage. It was never compute. It was always access. Who can see this data. Who can understand it. Who has the context to make it actionable. Who can read the schema, the lineage documentation, the compliance requirements.
The answer was always: fewer people than we needed.
Every data platform I have built was fundamentally a literacy infrastructure. A system designed to make data readable to the humans who needed to make decisions from it. Dashboards, documentation, data catalogs, access controls, training sessions. All of it existed because the humans in the loop needed help reading.
AI does not need dashboards. It does not need training sessions on how to read a schema. It does not need a data catalog to discover what exists.
This is not a prediction. This is what I am already seeing in production. The systems I build are increasingly consumed by AI agents that can parse, correlate, and act on data that would take a human team days to process.
What We Did Not Do
We did not assume this meant humans were no longer needed. That would have been the dramatic conclusion. It would also have been wrong. But the comfortable conclusion, that humans simply “move up” to judgment, may be equally wrong.
We kept humans at the validation layer and moved AI into everything below it. That was a deliberate scope decision, not a prediction about where the boundary would stay.
For six centuries, the constraint was literacy: getting humans to the point where they could read the data. Now the constraint is judgment: determining what to do when the data has been read.
Today, AI cannot decide what trade-off is acceptable when two regulatory frameworks conflict. It cannot determine which system to treat as authoritative when three data sources disagree and none can be paused. It cannot stand behind a decision in an audit room.
But “today” is doing a lot of work in that paragraph.
The Atrophy Problem
Judgment is not a standalone capability. It is the top of a stack. That stack looks like this: literacy, domain knowledge, contextual reasoning, and judgment. Remove the bottom, and the top has nothing to stand on.
When GPS became standard, spatial navigation skills degraded, a finding confirmed by peer-reviewed research in Scientific Reports (a Nature Portfolio journal). Studies on calculator use show a similar pattern, though the evidence there is more debated. The mechanism is the same either way. The tool that removes the need for a skill also removes the pressure to develop that skill.
If AI reads everything for us, who spends 18 years learning to read deeply? If AI summarizes every compliance document, who develops the capacity to spot what the summary left out? If AI shows correlations across data sources, who builds the intuition to know when those correlations are misleading?
The optimistic version assumes a clean handoff: AI handles reading, humans handle judgment. But judgment grows out of years of reading. The executive who can make the right call in an audit room built that capability through decades of reading contracts, regulations, incident reports, and case law. Remove that developmental pathway, and you do not get a human who is “freed up for judgment.” You get a human who never developed it.
This is not hypothetical. I already see it on data teams. Junior engineers who rely on AI-generated code summaries struggle more with architectural decisions than their predecessors, who had to read the codebase line by line. The reading was not overhead. It was training.
The Retreating Boundary
There is a second problem, and it is harder to dismiss.
Every capability once called “uniquely human” follows a pattern: first declared impossible for machines, then merely difficult, then solved.
Reading text. Image recognition. Translation. Logical reasoning. Each was, at some point, declared a durable human advantage. None of them held.
The claim that “judgment remains human” follows the same rhetorical pattern. It describes a current limitation, not a structural one. There is no identified mechanism that makes trade-off evaluation, contextual reasoning, or decision-making under ambiguity permanently inaccessible to AI. We simply have not seen it done well yet.
That is a statement about timing, not about boundaries.
In the systems I build, AI is already making low-stakes trade-off decisions. Which data quality rule should be relaxed when ingestion is delayed? Which schema migration path minimizes downstream breakage? Which alert to escalate and which to suppress? These were human judgment calls two years ago. They are not anymore.
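To give a flavor of what these low-stakes calls look like once they are automated, here is a minimal sketch of an alert-triage policy of the kind described above. Everything in it (the field names, the weights, the threshold) is a hypothetical illustration, not a production rule:

```python
# Hypothetical alert-triage policy: the kind of low-stakes trade-off
# decision that used to be a human judgment call. All weights and
# thresholds below are illustrative assumptions, not tuned values.
from dataclasses import dataclass


@dataclass
class Alert:
    source: str                 # which pipeline raised the alert
    severity: int               # 1 (low) .. 5 (critical)
    repeats_last_hour: int      # how often this alert has fired recently
    downstream_consumers: int   # tables/dashboards fed by this pipeline


def should_escalate(alert: Alert) -> bool:
    """Escalate when estimated impact outweighs likely noise."""
    score = (
        alert.severity * 2
        + min(alert.downstream_consumers, 10)  # cap the blast-radius term
        - (3 if alert.repeats_last_hour > 20 else 0)  # flapping penalty
    )
    return score >= 10


# A critical alert feeding many consumers escalates; a noisy,
# low-severity, frequently flapping one is suppressed.
print(should_escalate(Alert("ingest", 5, 2, 8)))   # True
print(should_escalate(Alert("ingest", 1, 50, 1)))  # False
```

The point of the sketch is not the specific rule. It is that once the inputs are machine-readable, the trade-off itself (impact versus noise) reduces to a policy a system can apply thousands of times a day, and the human role shrinks to deciding whether the policy is right.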
The boundary is not stable. It is retreating. And the retreat is accelerating.
What I Have Seen Stabilize (For Now)
The systems I build became faster to operate. Data that previously required a team of analysts to interpret could be pre-processed and surfaced with context already attached. The human role shifted from “read and interpret” to “validate and decide.”
That shift produced real gains. Smaller, more focused human involvement at the decision point. Broader, faster machine involvement at the data processing layer. The platforms became more operable.
But I am not confident this equilibrium is stable. It works today because AI judgment is narrow and brittle in high-stakes contexts. That is an observation about 2026, not a law of nature.
The Structural Observation
Every great leap in human progress was tied to expanding who could access data. The printing press. Universal education. The internet. Each removed a gatekeeping layer.
AI is the first entity that does not need the gate opened. It was never behind the gate. It processes data without the 12-to-18-year investment that every human requires.
Economic historian Joel Mokyr used the concept of a “phase transition” to describe the Industrial Revolution, the moment when episodic innovation clusters became self-sustaining growth. Each major access expansion (the printing press, universal education, the railroad) contributed to the conditions that made that transition possible.
The difference this time is structural. Previous transitions expanded human access. This one introduces a non-human entity that already has access. And unlike previous transitions, this one may erode the very capabilities it claims to free humans to use.
The next leap will come. Not because AI is smarter than humans. But because it has no reading bottleneck. The entity that can process all available data will generate insights, connections, and outputs that the reading-constrained entity cannot.
The constraint was always literacy. AI has none.
What remains is judgment. But judgment is not a fortress. It is a skill built on the same literacy foundation that AI is making optional. If we stop building the foundation because the machine handles it, we lose the thing we claimed made us irreplaceable.
The honest position is not “judgment is human.” The honest position is: we do not know how long judgment remains human, and we are already undermining the pipeline that produces it.
Data platforms that survive growth, stress, and reality. - Can Artuc - can [at] dataprincipal.io

