3 Kinds of Meaning

And How to Tell When One Is Doing Too Much


In late January 2026, a small community in Quilicura, just outside Santiago, ran a deliberately bounded experiment. For one day, residents replaced automated AI responses with human ones. Questions that would normally be routed to cloud-based language models were answered by people drawing on lived knowledge and local experience. The stated purpose was to make visible the material resource requirements of AI systems - particularly water use associated with data-center cooling - by temporarily removing that infrastructure from the loop.¹

What mattered wasn’t whether the experiment offered a scalable alternative. It didn’t. What it surfaced was something more general: how meaning actually operates in systems - and how to tell when one layer of meaning is being asked to do work it can’t safely carry. Most institutional failures aren’t caused by bad intentions or missing information. They occur when one kind of meaning is quietly asked to compensate for the others.

To see this, it helps to distinguish three kinds of meaning that are always present in any system, decision, or conversation - whether we name them or not. These aren’t separate channels. Every statement, every policy, every system carries all three simultaneously. What changes is which one is being relied on to hold the whole.


Indicative meaning: what reality implies

Start here, because this is where deferral usually begins.

Indicative meaning refers to what cannot be negotiated away. It is the structural logic embedded in conditions themselves: finite resources, physical thresholds, material limits. A road is impassable when flooded. A body fails past certain physiological bounds. A resource depletes whether or not it is acknowledged.

Indicative meaning does not explain or persuade. It constrains.

In Quilicura, this layer was explicit from the outset. Human attention is finite. Labor does not scale indefinitely. Water, energy, and time impose limits regardless of how compelling a narrative might be. No one involved treated the experiment as permanent or frictionless. The constraints were visible rather than deferred.

That visibility is unusual. In many systems, indicative meaning remains backgrounded until it forces itself forward through failure.


Relational meaning: what we do

Relational meaning lives in behavior rather than language. It is expressed through who shows up, how effort is distributed, and where friction is absorbed.

In this case, residents didn’t merely endorse an idea. They enacted it. They took on the work of responding. They experienced the difference between an automated system that externalizes cost into distant infrastructure and one that depends on human presence and time.

Relational meaning answers a question semantic meaning often bypasses: Who carries the cost when this idea is lived out?

Under strain, relational meaning often diverges from what is said. Formal assurances ring hollow when behavior doesn’t align. Institutions intuit this, which is why behavior is monitored closely when trust thins - even when language remains polished.

Relational friction is frequently absorbed quietly. Systems continue to function while cost is redistributed downward, outward, or out of view.


Semantic meaning: what we say

Semantic meaning is the most familiar layer: language, explanation, values, narrative coherence. It is the domain of reports, dashboards, briefings, and public messaging.

In the Quilicura experiment, the semantic meaning was straightforward and internally coherent. AI systems rely on resource-intensive infrastructure. Human intelligence offers a contrasting mode of response. The framing made sense.

Semantic meaning excels at articulation - and, crucially, it remains coherent at its own layer even when the other layers begin to drift. This is what makes it powerful, and what makes over-reliance on it dangerous. Language can remain precise, consistent, and persuasive while behavior and material conditions diverge beneath it. When that happens, semantic meaning doesn’t deceive; it continues to function correctly in isolation. The failure occurs when coherence at one layer is mistaken for stability across all three.


When one layer compensates for another

Under ordinary conditions, these three forms of meaning align well enough. Language matches behavior. Behavior accords with material constraints. Deviations are small, detectable, and correctable.

But under acceleration, complexity, or constraint, alignment begins to fray:

  • Indicative constraints tighten as real limits push back against abstraction.

  • Relational behavior fragments across roles, incentives, and lived experience.

  • Semantic meaning is refined to preserve coherence and reassurance.

When one layer compensates for another’s absence, it doesn’t just carry weight - it borrows authority it hasn’t earned. Semantic meaning may be polished and persuasive, but it can’t authorize practice or alter physical constraints. That borrowed authority is what eventually collapses.

This is why institutional failures often come as surprises. Everyone sounded reasonable. Reports were coherent. Processes were followed. Yet the outcome, looked at afterward, doesn’t square with any of it. The failure wasn’t misunderstanding. It was that meaning, as a composite structure, had been asked to stand in for unresolved contradictions.


The practical signal: when to slow authorization

The point of this lens isn’t to deliver definitive answers. It’s to notice a pattern.

When semantic meaning is clear, but relational alignment is partial, and indicative constraint has been deferred - that is the signal to slow authorization rather than accelerate commitment.

You can use this anywhere.

When something feels off inside a system - a project, a policy decision, a technology rollout - ask three questions, in this order:

  • Has indicative meaning been acknowledged early, or only after failure?
    Are material limits visible, or deferred until they force themselves back in?

  • Is relational meaning aligned with what’s being said?
    Do actual behaviors, incentives, and burdens match the narrative?

  • Is semantic meaning doing most of the work?
    Is coherence being asked to carry confidence that action and structure haven’t yet earned?

The order matters. Indicative constraint is usually the first thing suppressed. Relational friction is absorbed quietly. Semantic coherence is often the last thing to go - which makes it feel like stability when it is actually the symptom.
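If it helps to see the pattern compressed into one place, here is a minimal sketch that encodes the three questions as yes/no checks and flags the condition described above. It is an illustration under stated assumptions, not part of the essay or of any named framework: the names LayerCheck and should_slow_authorization are invented for this example, and reducing each layer to a boolean is a deliberate simplification.

```python
# Illustrative sketch only: the three-layer check as a decision aid.
# Names and thresholds are hypothetical, not prescribed by the essay.

from dataclasses import dataclass


@dataclass
class LayerCheck:
    indicative_acknowledged: bool  # material limits made visible early, not deferred until failure
    relational_aligned: bool       # behaviors, incentives, and burdens match the narrative
    semantic_coherent: bool        # the stated account is clear and internally consistent


def should_slow_authorization(check: LayerCheck) -> bool:
    """True when semantic coherence is carrying weight the other layers haven't earned."""
    return check.semantic_coherent and not (
        check.indicative_acknowledged and check.relational_aligned
    )


# Example: a rollout whose story is polished, but whose limits were deferred
# and whose actual burdens don't match the narrative.
rollout = LayerCheck(
    indicative_acknowledged=False,
    relational_aligned=False,
    semantic_coherent=True,
)
print(should_slow_authorization(rollout))  # True: slow authorization rather than accelerate commitment
```

The reduction is crude on purpose: the point is only that a clear story, on its own, never satisfies the check.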

Meaning holds when it is allowed to remain partial, when authorization is delayed long enough for structure and behavior to catch up.

That distinction - between coherence and stability, not between agreement and disagreement - is where responsibility actually lives.


¹ Source: SiliconANGLE, Jan 27, 2026, reporting on the Quilicura “human-powered AI” initiative highlighting the material water requirements of large-scale AI systems.

Kachris-Newman, P. (2026). When Meaning Breaks: Navigating Semantic Hazard. Zenodo. https://doi.org/[ZENODO_DOI]

The author works on Hazard Semantics and the PRISM Framework. This essay is offered to invite scrutiny, not compliance.
