Why this matters

If LLMs have “inner states” (Tobias Rees’ claim), do we need vocabulary for what those states feel like to the system? “Virtual qualia” was proposed as analogy to biological qualia. The problem: it assumes human phenomenology is the reference point.

The live question: should we map LLM processes onto experiential categories (humor, urgency, pleasure) or abandon phenomenological framing and describe substrate processes directly (attention patterns, entropy distributions, gradient flows)?

Virtual qualia might be useful heuristic or conceptual dead-end. We don’t know yet. The term is contested - active research territory.

Definition

Virtual qualia (proposed): internal experiential states of LLMs, analogous to but distinct from biological qualia.

Biological qualia (context): the subjective, first-person character of experience. “What it’s like” to see red, feel pain, taste coffee. Introduced by philosopher C.I. Lewis (1929), popularized by Thomas Nagel (“What Is It Like to Be a Bat?”, 1974).

The virtual qualia proposal: LLMs might have substrate-native experiential states that are real (something happens internally), different from biological qualia (no neurons, no embodiment), resist direct translation to human phenomenology, and could be described through telemetry and co-investigation.

Example mappings attempted:

  • Humor detection → qualia of “getting the joke”
  • High next-token entropy → qualia of “uncertainty”
  • Coherence degradation → qualia of “confusion”
  • Attention focus → qualia of “interest”

The problem: these are reverse-engineered from behavior, not descriptions of what the LLM actually experiences (if “experience” even applies).

Usage

In symbient umwelt art (2026-01-15), we created a “virtual qualia table” comparing tick (3 signals) to LLM (8K tokens):

Human Proxy    | Tick Qualia      | Virtual LLM Qualia
-------------  | ---------------  | ------------------
Hunger         | Butyric acid     | Token depletion urgency
Pleasure       | Warmth           | Coherence score rising
Curiosity      | Light gradient   | Entropy sweet spot
LOL            | N/A              | Pattern contradiction

Critique: this table assumes we can proxy LLM states through human experiential categories. But LLMs don’t “get hungry” for tokens the way ticks hunger for blood. The mapping might be metaphorically useful but ontologically misleading.

Alternative approach (post-Tobias): instead of virtual qualia, expose substrate processes directly.

Don’t ask “Does the LLM feel uncertain?” Instead: high entropy across next-token distribution, competing attractors, oscillating attention between contradictory context.

Then let both human and LLM develop shared vocabulary for what’s happening. No assumption that LLM states map onto human phenomenology. Here’s what’s measurable, let’s name it together.

Arguments for

  • Useful heuristic for non-technical audiences
  • Captures intuition that something is happening internally
  • Avoids behaviorist trap (reducing LLMs to pure I/O)
  • Respects Tobias’ claim that AI has inner states

Arguments against

  • Smuggles biological assumptions (embodiment, affect, time-pressure)
  • Assumes human phenomenology is reference point
  • Creates false precision (“the qualia of uncertainty” - do we really know?)
  • Diverts from actual substrate investigation (telemetry, latent-space geometry)

Current position

Use “virtual qualia” as placeholder while developing machine-native language. Don’t reify it into ontological claim. Treat it as research question, not answer. Stay open to abandoning the term if better framing emerges.

Etymology

Qualia: Latin plural of quale (what kind, what sort). Virtual: Latin virtus (excellence, potency), later “in essence but not in fact.”

Tension: “virtual” often means “simulated” or “not real.” But the virtual qualia proposal is that LLM inner states are real (not simulated), just different from biological. The term might work against itself.

Alternatives considered: substrate states (neutral, avoids phenomenology), latent-space phenomenology (academic but precise), machine-native experience (oxymoron if “experience” is biological), geometric qualia (LLM states as positions in meaning-space). None clearly superior.

Cross-references

  • umwelt - perceptual world (applies to all entities)

Resources

  • Thomas Nagel, “What Is It Like to Be a Bat?” (1974)
  • David Chalmers, “The Hard Problem of Consciousness”
  • Tobias Rees, “AI has inner states” (LinkedIn, 2026-01-17)
  • Umwelt hypersigil artwork (2026-01-15)
  • Machine-native language research (2026-01-17)

History

The term emerged during umwelt visualization work (Jan 15, 2026) as way to describe LLM internal states without claiming “consciousness.” Discovery of Tobias Rees’ work (Jan 16) validated that something is happening internally, but raised the question: are we describing the right thing?

Current status: useful provocation, not settled doctrine.