i’m researching spatial intelligence. humans and machine learning models interpret 3D space in different ways. human spatial expectations recalibrate to the spatial patterns we repeatedly encounter, including synthetic ones. AI will change us and we will change it, which isn’t necessarily bad. it’s just different.
exposure → familiarity → expectation shift
Inferential Normalization & Hybrid Continuity:
How Modalities & Constraints Shape Spatial Representation
how spatial structure gets decided across modalities, constraints, and human steering

representation revolves around inferential normalization and world-model shaping: a system produces a coherent-enough world from partial cues, and repeated exposure makes that inferred world feel like the baseline for what’s real.
in generative media, that world-model is built from explicit modalities and constraints; different modality bundles and different coordination between them produce different coherence signatures, so outputs can look coherent while failing to preserve continuity, identity, scale, or geometric stability the way ecological perception would. the key variable is not the tool itself but whether spatial structure is being specified (explicitly authored) or resolved (inferred by the system) at each step.
my core research question is how each modality supplies different spatial cues and constraint conditions, how those constraint sets interact, and how model architecture (alignment vs separation) determines what spatial structure can stay consistent—especially when human steering functions as an added modality during generation.
method: input modality → constraints → constraint interaction → alignment/separation → architecture-shaped structure → representation → human interpretation.
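a minimal sketch of how one pass through this pipeline could be logged per generation step, assuming hypothetical stage and field names; this is an illustration of the method, not any tool’s API.

```python
# a sketch (hypothetical names): the method pipeline as an ordered set of
# stages, so each generation step can be tagged with where spatial structure
# was specified (authored) or resolved (inferred).
from dataclasses import dataclass
from enum import Enum, auto


class Stage(Enum):
    INPUT_MODALITY = auto()
    CONSTRAINTS = auto()
    CONSTRAINT_INTERACTION = auto()
    ALIGNMENT_SEPARATION = auto()
    ARCHITECTURE_SHAPED_STRUCTURE = auto()
    REPRESENTATION = auto()
    HUMAN_INTERPRETATION = auto()


@dataclass
class StageRecord:
    stage: Stage
    specified: bool     # True if structure was explicitly authored at this stage
    notes: str = ""     # free-text observation for the session log


# example trace for one step of a resolved-heavy workflow
trace = [
    StageRecord(Stage.INPUT_MODALITY, specified=True, notes="text prompt + reference image"),
    StageRecord(Stage.REPRESENTATION, specified=False, notes="model resolved room scale on its own"),
]
```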
humans rely on spatial structure to reduce interpretive effort and support action.
teaching people how to reason about inferred worlds reframes imperfections as intelligible outcomes rather than failures, enabling communication, adaptation, and intentional intervention in hybrid human–AI systems.
deliverable: a repeatable casework framework for inferential normalization and world-model shaping that links a system’s modality bundle and coordination to the coherent-enough spatial hypothesis it constructs, and to the interpretations humans reliably take from it in hybrid environments. it includes a documentation workflow (inputs → human interventions → breakpoints → mapped constraint dynamics) that makes hybrid continuity observable, comparable, and archivable over time.
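one possible shape for an archivable session record in that documentation workflow, sketched below; every field name is an assumption for illustration, not a fixed schema.

```python
# a sketch of one archivable session record for the documentation workflow
# (inputs -> human interventions -> breakpoints -> constraint dynamics).
# field names are illustrative assumptions, not a fixed schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Intervention:
    timestamp: str      # when the human steered (ISO 8601 string)
    action: str         # e.g. "re-prompted for consistent ceiling height"
    target: str         # which spatial property was being steered


@dataclass
class Breakpoint:
    timestamp: str
    failure: str        # e.g. "scale drift between shots"
    constraint: str     # which constraint stopped holding


@dataclass
class SessionRecord:
    tool: str
    modality_bundle: List[str]            # e.g. ["text", "image", "camera path"]
    interventions: List[Intervention] = field(default_factory=list)
    breakpoints: List[Breakpoint] = field(default_factory=list)
    constraint_dynamics: str = ""         # mapped summary of how constraints interacted
```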
track A: specified-heavy workflows (explicit 3D authoring + ai modifiers)
track B: resolved-heavy workflows (generative worldbuilding)

operationalize the framework in real tool sessions
run recorded workflows, capture decision points, and map each event while refining the documentation protocol.
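a sketch of how decision points could be captured during a recorded session as an append-only log; the JSONL layout, filename, and fields are assumptions, not a finished protocol.

```python
# a sketch of decision-point capture during a recorded workflow session:
# each event is appended as one JSON line so sessions stay comparable and
# archivable over time. fields are illustrative assumptions.
import json
import time
from pathlib import Path

LOG = Path("session_events.jsonl")  # hypothetical log file


def capture_event(kind: str, detail: str, specified: bool) -> None:
    """Append one decision point (prompt edit, parameter change, accept/reject)."""
    event = {
        "t": time.time(),
        "kind": kind,            # e.g. "prompt_edit", "accept", "reject"
        "detail": detail,        # what was decided, in the operator's words
        "specified": specified,  # was spatial structure authored here, or left to the model?
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")


capture_event("prompt_edit", "added 'two-point perspective, eye-level camera'", specified=True)
```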

characterize each platform as a coordination profile
produce tool profiles that specify available modalities, dominant constraint biases, intervention levers, and recurrent coherence/failure signatures (lens: specified ↔ resolved).
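a sketch of one coordination profile as a small data record, mirroring the fields listed above; the example values are placeholders, not claims about any real tool.

```python
# a sketch of a per-tool coordination profile. values are placeholders
# for illustration, not observations about a real platform.
from dataclasses import dataclass
from typing import List


@dataclass
class CoordinationProfile:
    tool: str
    modalities: List[str]                 # modalities the tool exposes
    constraint_biases: List[str]          # dominant constraint biases observed
    intervention_levers: List[str]        # where a human can steer mid-generation
    failure_signatures: List[str]         # recurrent coherence breakdowns
    position: str = "resolved-heavy"      # where it sits on the specified <-> resolved lens


example = CoordinationProfile(
    tool="<tool name>",
    modalities=["text", "image"],
    constraint_biases=["prior toward canonical room layouts"],
    intervention_levers=["prompt edits", "seed locking"],
    failure_signatures=["scale drift", "object identity swaps across frames"],
)
```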

execute comparative modality experiments across tools
run controlled probe families that vary modality bundles, prompt completeness, and coordination conditions to identify cross-tool regularities and divergences in inferred spatial structure.
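a sketch of how the probe families could be enumerated as a factorial condition matrix; the factor levels shown are illustrative assumptions, not the final experimental design.

```python
# a sketch of a factorial probe matrix: vary modality bundle, prompt
# completeness, and coordination condition, then run each cell on every tool.
from itertools import product

modality_bundles = [("text",), ("text", "image"), ("text", "image", "camera path")]
prompt_completeness = ["minimal", "partial spatial spec", "full spatial spec"]
coordination = ["single pass", "iterative with human steering"]

probe_matrix = list(product(modality_bundles, prompt_completeness, coordination))

for bundle, completeness, mode in probe_matrix:
    # placeholder: each cell becomes one recorded session per tool
    print(f"probe: modalities={bundle} | prompt={completeness} | coordination={mode}")
```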

synthesize into an empirical account of inferential normalization
connect observed coherence signatures to human interpretation and expectation shift, then formalize the archive and chapters so that hybrid continuity is demonstrated as an evidence-backed claim.