Human invariants
1. Beginning from Orientation in World Philosophy
In addition to teaching I also see patients. One of them said to me that my work in philosophy seems to look for mathematical and philosophical invariants. This person asked me: what do you see as invariant in everyday life? I answered with a question -- what is the same through human change? I thought of historical responses to the question and ultimately of the 'existential givens' pointed out by thinkers like Jaspers, Heidegger, and de Beauvoir. I thought of some ideas about basic human patterns -- such as we see in prospect theory, psychoanalysis, and psychophysics -- potential invariants from daily life. Some ideas came to mind –
The structure of temporality itself - not just that we experience time, but that we experience it with a particular topology: the asymmetry between past (fixed, known in principle) and future (open, uncertain), and the way the present moment has a peculiar "thickness" to it. This is invariant across cultures and epochs in a way that's almost too obvious to notice.
The gap between intention and outcome - we act in the world with purposes, yet the world responds according to its own logic. This generates surprise, regret, gratitude, learning. There is no form of human life that doesn't contend with this gap.
The need for recognition - not just Hegel's master-slave dialectic, but something more fundamental: that our sense of ourselves requires some mirroring or acknowledgment from outside ourselves. Even the hermit defines himself against society. This seems present in infancy (attachment) and persists through all human social arrangements.
The fact of embodiment with its particular constraints - fatigue, hunger, pleasure, pain, the way attention is limited and must be allocated. Even as technology extends our capacities, it doesn't eliminate these constraints; it just shifts their expression.
The basic structure of learning through error - we form expectations, they're violated, we update. This applies from motor learning to scientific revolution. Prediction error is a fundamental currency of biological intelligence.
The existential givens as the center - death, freedom, isolation, meaninglessness (in Yalom's formulation) - not just psychological facts, but structural features of any existence that combines the human mixture: finitude, consciousness, and separateness. These are transcendental conditions for experience (loosely speaking).
Psychophysical invariants (the Weber-Fechner laws, Stevens' power law) suggest that even subjective experience has a mathematical structure that's remarkably stable across individuals – we are sensitive to relative thresholds and apply them whether they belong or not. (A minimal numerical sketch of both laws follows this list.)
Philosophy: something invariant in the form of human questioning itself - not the content, but the fact that we ask "why?" in a way that seems qualitatively different from other animals. We are perpetually troubled by justification – the space between is and ought.
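As promised above, a minimal numerical sketch of the two psychophysical laws, in Python. The constants, and the Stevens exponent (0.33, roughly the published value for brightness), are illustrative defaults, not fitted parameters.

```python
import numpy as np

def fechner(intensity, i0=1.0, k=1.0):
    """Weber-Fechner: sensation grows with the log of the stimulus ratio."""
    return k * np.log(intensity / i0)

def stevens(intensity, exponent=0.33, k=1.0):
    """Stevens' power law: sensation ~ intensity ** exponent."""
    return k * intensity ** exponent

intensities = np.array([1.0, 10.0, 100.0, 1000.0])
print(fechner(intensities))  # each tenfold increase adds the same increment of sensation
print(stevens(intensities))  # compressive growth, but not logarithmic
```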
2. Criticism from Habermas
Habermas was an important influence in my earlier thinking. He challenges the search for human universals as another face of idealism, Platonism, another attempted escape from contingency and the messiness of everyday life in search of pure ideals and beautiful abstractions. I see my universalist aspirations in a different light -- as looking to logical simplicity at the foundations -- but perhaps Habermas' criticism is fair.
Habermas' criticism is sharp – worth taking seriously. I think I can answer it.
The crucial distinction seems to be between seeking invariants as a way to escape contingency versus seeking them as a way to understand the structure within which contingency occurs and operates. These are radically different projects.
The escapist version - what Habermas rightly criticizes - treats universals as a realm of purity untainted by history, culture, bodies, power. It's the move that says "never mind all this messy lived experience; the real truth is up here in the realm of forms." This becomes ideological when it uses abstraction to avoid responsibility, to deny the particular claims of actual suffering people, to dismiss context as merely "accidental."
In my framing - "simplicity of foundations" – the idea is more what we might call structural realism or perhaps even transcendental inquiry in the Kantian sense: asking what must be the case for this particular (the contingent, messy, historical phenomenon before us) even to be possible at all? This is not escaping contingency but asking about its conditions.
Consider: evolutionary biology seeks invariant principles (natural selection, genetic inheritance) – not in order to deny the riotous diversity and contingency of life, but to explain how such diversity is generated and constrained. The invariants don't make the particulars less real or important - they make them intelligible.
In clinical work: understanding something like the process of transference as a relatively invariant pattern doesn't deny the absolute uniqueness of each therapeutic relationship.
It empowers a framework within which to understand what makes this relationship, with this person, at this moment, matter in the way that it does – distinct from every other.
Habermas himself, ironically, can't entirely avoid the search for invariants. His theory of communicative action posits certain universal pragmatics - conditions that must hold for genuine communication to occur. He's looking for the invariant structure of non-distorted communication. So even in criticizing the search for universals, he engages in it - just with different content and (crucially) a different attitude toward what finding invariants means.
Perhaps the key is this: Is one seeking invariants in order to stop the conversation ("here's the eternal truth, end of story") or to enable better conversation by clarifying what we're all working with? The former is the Platonic temptation Habermas warns against. The latter is what science does, what philosophy does, what clinical wisdom does – when things work.
Weber's law doesn't make particular sensory experiences less rich or real – but it describes a relationship that seems to hold across those experiences. It's a regularity discovered in the mess, not imposed from ‘outside’ to clean it up.
There's also the question of scale and scope. Some invariants are narrow and technical (prospect theory's specific parameters), others broad and abstract – (lived temporality, embodiment). The broader they get, the more careful we need to be that we're describing structure rather than smuggling in particular cultural content disguised as universal form.
Cheap overgeneralization smuggles in cultural baloney – easy to miss.
The danger isn't abstraction or generalization per se - it's unacknowledged particularity masquerading as universality. When 18th-century European thinkers proclaimed universal "Reason" while actually describing the cognitive style of educated European men, that's smuggling. When psychologists claimed universal human laws based on WEIRD populations (Western, Educated, Industrialized, Rich, Democratic), that's smuggling.
But the solution isn't to abandon the search for invariants - it's to be rigorous and honest about:
The domain of application (what populations, contexts, scales?)
The level of abstraction (formal structure vs. concrete content?)
The evidence base (how do we know this holds across variation?)
The purpose (why does identifying this regularity matter?)
Prospect theory's loss aversion, the psychophysical laws, the existential givens - measured against these criteria, they are not Western constructs dressed up as human nature.
They are universal.
3. Human invariants / first approximation
The structure of temporality
The gap between intention and outcome
The need for recognition
The fact of embodiment and its constraints
The basic structure of learning through error
Death, freedom, isolation, meaninglessness –
structural features of any existence that combines the human mixture:
<finitude, life, separateness, consciousness>
= transcendental conditions
= tautological conditions of being a conscious, embodied creature
Restricting invariance to a domain of application mirrors results in model theory (in logic) and group theory (in mathematics): invariants only make sense in a context.
An invariant is a temporally flat variable.
I.e.: invariance is always relative to some transformation group, some set of operations under which a thing remains unchanged.
In group theory, what's invariant depends entirely on what transformations you're considering. A quantity might be invariant under rotation but not under translation, or vice versa. The symmetry group defines what counts as invariant. There's no such thing as "invariant simpliciter" - it's always "invariant under G" for some group G.
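A toy illustration of "invariant under G," sketched in Python with arbitrary numbers: the Euclidean norm of a vector is unchanged by rotations (elements of SO(2)) but not by translations.

```python
import numpy as np

def rotate(v, theta):
    """Apply a rotation by angle theta - an element of the group SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ v

v = np.array([3.0, 4.0])
print(np.linalg.norm(v), np.linalg.norm(rotate(v, 1.2)))          # invariant under rotation
print(np.linalg.norm(v), np.linalg.norm(v + np.array([5.0, 0.0])))  # not invariant under translation
```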
Model theory makes the same point from another angle: a sentence might be true in all models of a theory (hence "invariant" across those models), but completely meaningless or false outside that theoretical context.
The invariance is internal to a structure, not floating free in some Platonic heaven.
-- -- What is ‘there’ has to do with the process engaged for finding it.
4. Invariance under transformation
What is “invariant” under what “transformations”? --
Death as an existential given is invariant under cultural transformation, historical change, individual variation - it's a structural feature of finite existence itself
Loss aversion looks to be invariant under cultural transformation but not under certain kinds of framing effects or decision contexts (a standard parameterization is sketched after this list)
Basic cognitive biases look to be invariant across typical human experience – but not under conditions of expertise or training
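The sketch promised above: the Tversky-Kahneman (1992) value function, using their reported median parameter estimates. The point is only to show what "loss aversion as a stable pattern" looks like when written down; the numbers are the published estimates, not a claim about any particular population.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function: concave for gains,
    convex for losses, and steeper for losses (lam > 1 encodes loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

print(prospect_value(100))   # subjective value of a 100-unit gain
print(prospect_value(-100))  # the matching loss looms roughly twice as large
```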
Temporal flatness is key. When we say something is invariant, we're saying: as we move through time, across contexts, through variations - this doesn't change. It's a constant in the equation while other things are variables. The problematic move isn't finding invariants - it's claiming invariance across a transformation group larger than the evidence warrants, or worse, using the language of invariance to stop inquiry rather than to structure it.
Just as in physics, where finding conserved quantities (invariants under time translation, spatial translation, etc.) gives you the fundamental laws, finding genuine human invariants might reveal something like the "conservation laws" of human experience – these would represent the deep constraints within which all the variation plays out.
Conservation laws imply a conserved quantity and correspond to symmetries – hence the question: what is conserved in human life? What deep conserved quantity generates the ‘human’ pattern?
5. Conservation laws
What deep conserved quantity generates the ‘human’ pattern?
Conservation laws correspond to symmetries.
If we can identify the continuous symmetries of a system, we should be able to figure out the conserved quantity associated with each symmetry (in the Hamiltonian picture, the conserved quantity generates the symmetry) … this is Noether’s theorem.
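For reference, a schematic statement in the simplest Lagrangian setting - the case where the Lagrangian itself is left exactly unchanged by the transformation, with the conserved quantity evaluated along solutions of the equations of motion:

\[
L(q + \epsilon\,\eta,\ \dot{q} + \epsilon\,\dot{\eta}) = L(q, \dot{q}) \ \text{for all small } \epsilon
\quad\Longrightarrow\quad
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\,\eta\right) = 0.
\]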
Potential Symmetries / Conserved Quantities …
Translation Invariance, Duality Symmetries, Filtration Monotonicity / directional symmetry (very like entropy in thermodynamics: time-reversal breaking with a conserved arrow), Spectral Sequence Convergence …
The thick/thin distinction matters
For Thin Invariants: The symmetries are often exact -- shifting degrees, duality isomorphisms, etc. The conserved quantities are algebraic structures (grading, pairing).
For Thick Invariants: The symmetries are at best approximate or statistical: in persistent homology, features persist across parameter changes, but not perfectly -- there's "birth" and "death"
The conserved quantity could be something like total topological complexity measured across all scales, which redistributes but has some global constraint
The conserved quantity: the Euler characteristic (thin) constrains the total persistence (thick) across all filtration parameters.
Conserved charges (gauge symmetries) constrain what field configurations are possible. So, thin invariants act like a topological charge that limits how thick features can manifest.
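To make the "birth" and "death" language above concrete, here is a minimal, self-contained Python sketch of degree-zero persistence (connected components) computed with a union-find and the elder rule. It is not a library API; every name here is illustrative.

```python
def zero_dim_persistence(n_points, edges):
    """edges: (filtration_value, i, j) triples. Returns (birth, death) bars
    for connected components. All vertices are born at filtration value 0."""
    parent = list(range(n_points))
    birth = [0.0] * n_points

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    bars = []
    for t, i, j in sorted(edges):
        ri, rj = find(i), find(j)
        if ri == rj:
            continue  # this edge closes a loop; no component dies
        # elder rule: the younger component dies when the two merge
        young, old = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
        bars.append((birth[young], t))
        parent[young] = old
    bars.append((0.0, float("inf")))  # one component persists forever
    return bars

# three points joined at distances 1.0 and 2.5; the third edge only closes a loop
print(zero_dim_persistence(3, [(1.0, 0, 1), (2.5, 1, 2), (3.0, 0, 2)]))
```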
The balance is maintained by asymptotically approaching a goal one can never reach
This makes the unreachable limit into a structural principle
Think about how this plays out:
Each turn gets you "closer" to the limiting invariant, but you never compute E_∞ directly—you approach it through successive approximations.
The information is conserved in the sense that what you learn at each stage constrains all future stages, but you can never jump to the end.
Thin invariants are what you get at the limit – they're the convergence point
Thick invariants are the trajectory toward that limit – they're the geometric path
The conserved quantity (information) ensures that:
You can't get more information at the limit than you accumulated along the way
But you can't see the "shape" of that information without tracking the levels
This is very like free energy versus entropy
Entropy (thin): The limit quantity, what's actually conserved globally
Free energy (thick): The quantity you can actually compute/measure
As temperature → 0, free energy → ground state energy, but you never reach absolute zero
The level index plays the role of inverse temperature – as you refine (k→∞), you approach the true invariant asymptotically.
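The textbook relation being leaned on here, for reference:

\[
F = -k_B T \ln Z, \qquad Z = \sum_i e^{-E_i / k_B T}, \qquad F \to E_0 \ \text{(the ground-state energy) as } T \to 0.
\]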
Thick invariants are "computable approximations" to thin invariants, which play the role of ideal limits.
You can't get more information at the limit than you accumulated along the way
... there's no free lunch.
That is itself (a formulation of) the conservation law.
The No-Free-Lunch Principle for Level-Indexed Invariants
This crystallizes something important:
Statement: The information content of any thin invariant (the limit) is bounded above by the accumulated information in the thick invariants (the trajectory).
You can't know more about the limit than you learned getting there.
What This Rules Out
This principle explains why certain things are impossible:
No Oracle: You can't compute H_k(X) without doing work at level k (or work that propagates from other levels). There's no shortcut that bypasses the filtration resolution/approximation structure.
No Clairvoyance: If you only compute the first N levels, you get at most N levels' worth of information about the limit. Some "persistence landscapes" die off quickly; they're telling us that most of the limiting information has already been captured.
The conserved quantity (total information) acts like a budget. The geometry of the space determines how that budget gets spent across levels.
6. Self-questioning
We appear to have smuggled in an observer/computational agent. But the geometry – the manifold, the space, the structure – just is. It doesn't care about our resolution strategy.
Still: The thickness seems to capture something objective about the space.
This leads to paradox.
Paradox / Possible interpretations:
Thickness is Relational, Not Intrinsic
Maybe thickness isn't a property of the space alone, but of the space-plus-computational-framework pair.
Just as:
Energy isn't absolute but relative to a choice of coordinates/frame
Entropy depends on macroscopic variables one chooses to measure
The "thickness" of an invariant might be measuring the mismatch between the intrinsic structure and the particular algebraic/computational apparatus you're using to probe it
Agency is implicit in the geometry
Maybe the geometry does determine a preferred computational structure. For instance, a cell decomposition of a manifold has a natural filtration by skeleta (0-skeleton ⊂ 1-skeleton ⊂ ...). This isn't imposed from outside—it's latent in how the space is built up from local pieces. The "agent" is just making explicit what's already implicit in the gluing data
Thickness measures friction between scales
//
Self-doubt:
What if the agent-dependence tells us something about reality itself?
In physics, we've learned that the speed of light isn't about light – it's about causal structure; the Planck constant isn't about quanta – it's about the geometry of phase space; the Bekenstein bound relates information capacity to geometry.
Computational complexity is a geometric invariant in this sense – the growth rate of an algorithm as a function of its input size, summarized in Big O notation.
The "thickness" measures how this geometry resists compression – how many bits one needs to specify it – this is not agent-dependent if there's a unique minimal description
Is agent-dependence a bug or a feature? (Thesis: there is no a unique minimal description)
I.e., thickness is relational, not intrinsic -- this is very like the idea in the calculus that if one zooms in closely enough on a curve, one sees a straight line
In calculus, the tangent line at a point is the first-order approximation to the curve. Zoom in far enough (take the limit), and curve and tangent line become indistinguishable. But:
How quickly they become indistinguishable depends on the curvature
Higher derivatives measure how the curve deviates from its tangent
You need the full Taylor series (infinitely many levels!) to recover the curve from its tangent
Thickness might be measuring how many "orders of approximation" one needs before the approximation becomes good enough.
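A small Python illustration of "orders of approximation" in this sense, using exp(x) at x = 1 (any smooth function would do); the error remaining after truncating the Taylor series is one crude measure of how much "thickness" is left at each level.

```python
import math

def taylor_exp(x, order):
    """Partial Taylor sum of exp(x) around 0, up to the given order."""
    return sum(x ** n / math.factorial(n) for n in range(order + 1))

x = 1.0
for order in range(6):
    err = abs(math.exp(x) - taylor_exp(x, order))
    print(f"order {order}: remaining error {err:.5f}")
# Each additional order is a further level of refinement; how quickly the
# error shrinks is what the text calls the thinness or thickness of the object.
```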
In this view, the conserved quantity is the gap between levels – the information about "how wrong" your current approximation is – the failure rate, error rate, error landscape …
The key insight: Whether something is "straight" (thin) or "curved" (thick) depends on what you're measuring it against.
A geodesic on a sphere curves in 3D Euclidean space, but is "straight" intrinsically on the sphere
The same space can be thin with respect to one invariant, thick with respect to another !!!
The thickness measures the levels of refinement that separate us from an initial probe (they exist in time)
What if curvature itself is just thickness in disguise?
Flat spaces: Thin (tangent structure = global structure)
Curved spaces: Thick (need many approximation orders to capture global from local)
The Riemann curvature tensor appears at second order in the Taylor expansion of the metric (in normal coordinates).
Spaces with curvature are "thick" because you can't understand them from tangent data alone.
If thickness is relational – if it measures the mismatch between geometric reality and our approximation scheme – the information about "how wrong" one’s current approximation is – then what is conserved is a surprise function.
This reframes everything in terms of Kullback-Leibler divergence – relative entropy – a measure of how much an approximating probability distribution Q differs from the actual distribution P. The divergence D(P‖Q) is the expected excess surprise (in Shannon's sense) incurred by using Q when P is the case. More generally, we are looking at the information distance between the current approximation at level k and the truth (the limit as k → ∞).
At each level, you're surprised by how much the next level differs from what you expected. The conserved quantity is the total surprise you'll accumulate across all levels.
The total information-theoretic surprise Σ_k D(level_{k+1} || level_k) is bounded by (or equal to) the complexity of the limiting object.
The surprise of an event is the negative logarithm of its probability.
In the language we've been using:
Thin invariants: Low total surprise (you converge quickly, small KL divergence sum)
Thick invariants: High total surprise (each level reveals substantial new information)
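A toy numerical version of this bookkeeping, in Python. The target distribution and the intermediate "levels" are made up for illustration; the point is only to show step-by-step surprise (KL divergence between successive levels) alongside the distance still remaining to the limit.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(P || Q), in bits."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log2(p / q)))

target = np.array([0.7, 0.2, 0.1])          # the "limit" being approximated
levels = [np.array([1/3, 1/3, 1/3]),        # level 0: maximally uninformed
          np.array([0.5, 0.3, 0.2]),        # level 1
          np.array([0.65, 0.22, 0.13]),     # level 2
          target]                           # level 3: the limit itself

for k in range(len(levels) - 1):
    step_surprise = kl(levels[k + 1], levels[k])   # surprise of the next refinement
    remaining = kl(target, levels[k])              # distance still to go
    print(f"level {k}: step surprise {step_surprise:.3f} bits, remaining {remaining:.3f} bits")
```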
7. Result: the surprise function
No Oracle Property: You can't know at level k how surprised you'll be at level k+1 without actually going to level k+1. The surprise is generated by the resolution process.
Additivity: Surprises accumulate. If you're surprised going from level 1→2, and surprised going from 2→3, the total surprise is (roughly) additive.
Budget Constraint: The geometric object has a fixed "surprise budget" to distribute across levels.
A space can choose to:
Surprise you all at once (thin - most information at one level)
Surprise you gradually forever (thick - information trickling in across infinitely many levels)
Spectral sequences: Each differential d_r measures surprise: "I thought this cycle was permanent, but it bounds after all!"
Convergence means eventually, no more surprises
The E_∞ page is where surprise → 0
Persistent Homology:
Birth and death events are surprises: "A new cycle appeared!" / "A cycle just became trivial!"
The persistence diagram is a surprise map: it tells you when and how severely your understanding will be corrected
Long bars = delayed surprise (the feature persisted across many scales before finally dying)
Taylor Series: Each derivative is measuring surprise: "The function curved more than the previous order predicted!"
Analytic functions: finite surprise budget
Non-analytic: infinite surprise budget distributed across infinitely many derivatives
Symmetry behind the conservation
Now we can answer the original question about symmetries:
The symmetry is scale invariance (or its breaking).
When the system has exact scale invariance: No surprise at any level (thin)
When scale invariance is broken: Surprise distributed across scales (thick)
The conserved quantity (total surprise) measures how badly scale invariance fails.
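A toy check, in Python, of what "exact scale invariance" buys you: a pure power law rescales into a constant multiple of itself, so probing at a new scale reveals nothing new, whereas an exponential changes shape under rescaling. The functions and numbers are arbitrary.

```python
import numpy as np

x = np.linspace(1.0, 10.0, 5)
lam = 3.0  # rescaling factor

power = lambda t: t ** -2.0
expo = lambda t: np.exp(-t)

# Scale-invariant case: the ratio is a constant (lam**-2), independent of x.
print(power(lam * x) / power(x))
# Broken scale invariance: the ratio depends on x - each scale carries new information.
print(expo(lam * x) / expo(x))
```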
Surprise = information gained = entropy difference
Bayesian updating: Each level is a Bayesian update, KL divergence measures data gain
This formulation explains
--why thickness feels computational/agent-dependent (surprise is relative to expectations)
--yet geometrically meaningful (the geometry itself determines the surprise distribution).
The space doesn't care about your expectations, but it determines how those expectations will be violated as you refine your probe.
Testable prediction
If surprise is conserved, then:
Spaces that surprise you early (steep initial learning curve) should have less surprise later.
Spaces that seem simple at first should have more hidden surprise at fine scales.
This would manifest as:
Spectral sequences that collapse early vs. those with persistent differentials
Persistence diagrams with few long bars vs. many short bars
// Reflection
Is the principle of least action this conservation law or should we talk about consciousness in some way that incorporates negentropy?
The principle of least action is elegant precisely because it's so fundamental - it unifies mechanics, optics, quantum theory. But it's fundamentally about physical systems finding extremal paths through configuration space. Consciousness seems to involve something stranger: not just minimizing action.
Action may not be the language to get at it (?)
Consciousness creates distinctions, maintains improbable organized states against entropy, and - crucially - cares about the difference between states. Consciousness is the care-structure Heidegger points to.
The negentropy angle (Schrödinger's "What is Life?" territory) is compelling because living systems, and especially conscious ones, are precisely those that locally push back against entropy - we build structure, we preserve information, we resist equilibrium.
Consciousness = the capacity to maintain improbable distinctions and to treat them as meaningful … (my thesis in Important Nonsense)
Consciousness is neither least action nor negentropy … we have to come at it differently …
Consciousness is language-structured, symbol-structured, in ways that affect what states are even possible …
When someone can't find words for an experience in their new language, they're not just failing to report a pre-existing inner state. Often the experience itself has a different shape, different articulation in their first language. The phenomenology is genuinely different. Wittgenstein's point is right here: the limits of my language are the limits of my world.
This complicates the search for invariants in consciousness. Maybe the invariant isn't in the content of conscious states (which are language-shaped and therefore culturally variable) but in the form: the fact that consciousness involves maintaining distinctions, that it has intentionality (aboutness, care-structure), that it involves both a temporal flow and a sense of unity, that it requires some kind of self-world boundary (intention, outcome) …
There is the invariant need to articulate experience, even when the tools for articulation are inadequate. This is existential; in philosophy, it is the prime directive.
This idea connects to recognition: consciousness might require not just internal organization (negentropy) but external validation through symbolic exchange. The symbol is a way of locating this boundary.
The invariant might be less about information-theoretic properties and more about the dialectical structure - consciousness is always intersubjective, reaching for articulation.
Jung's Symbols of Transformation. Cassirer's Philosophy of Symbolic Forms. Goodman's Ways of Worldmaking. Fodor's The Language of Thought. The idea of conducting the search for invariants via symbols is attractive … but also problematic, since what Husserl called their 'saturation' tends to obscure rather than clarify understanding. Leibniz's dream of the characteristica universalis is closer to what we are looking for … perfect symbolic thinness
Symbols accrete meanings, histories, emotional valences. They become too rich, too overdetermined. Jung's symbols are so dense with archetypal resonance that they can mean almost anything. Cassirer's symbolic forms are illuminating but verge on the unfalsifiable. By the time a symbol has enough cultural weight to be interesting, it's already carrying too much baggage to be analytically clean
Leibniz's characteristica universalis appeals precisely because it promises the opposite: a formal system where symbols are thin - they mean exactly what they're defined to mean, no more, no less. Language becomes pure syntax that can be mechanically manipulated to yield truths about the world. The dream is to make reasoning as rigorous as calculation.
Mathematics works as a language for invariants precisely because mathematical symbols resist saturation.
When I write "∫" or "∂/∂t" or "G" (for a group), these symbols have disciplined, constrained meanings. They don't vary – there is no semantic drift. They don't gather associations.
The practical success of this approach is undeniable - physics discovers its invariants (conservation laws, symmetries) by changing from natural language to mathematical language – it is precisely by abstracting to this level that we see general truths.
The Noether theorem connecting symmetries to conserved quantities is only statable in this thin, formal idiom.
Yet the most interesting human invariants - existential givens, the structure of recognition, prospect theory's loss-aversion patterns - seem to exist at a level where pure formalization loses something essential. Death isn't just
∃t: life(x, t) ∧ ∀t′ > t: ¬life(x, t′)
The meaning of finitude, its felt structure, the way it organizes human projects – this vicissitude requires thicker description.
Perhaps the solution is stratified:
At the most formal level: thin symbols, mathematical structures (like psychophysical laws, or the temporal topology)
At an intermediate level: theoretical terms with disciplined meanings (like "transference," "loss aversion," "intentionality")
At the phenomenological level: thick description that resists formalization but reveals the lived structure (“love,” “desire,” “freedom” …)
Invariants therefore exist at their various levels -- expressed differently depending on what kind of precision one is after, what tools one is applying, which coordinate axes one has chosen.
Fodor's LOT tries to do both: a formal computational system and semantic content. But it arguably achieves this only by pushing the hard questions about meaning down to "primitives" and "conceptual atoms" that remain mysterious. The point is that there are different domains for different layers of symbolization -- some are very close to 'raw experience' and some very far from it.
// Moral
The distance from raw experience isn't just an epistemic issue. It's a moral issue because the further we move toward thin formalization, the easier it becomes to lose sight of what's at stake for actual people. Transition to the thin vocabulary often loses the magical kernel.
A policy analyst working with utility functions and discount rates is operating with very thin symbols, very far from raw experience. That distance enables powerful analysis - but it also makes it easier to forget that behind "units of utility lost" are people experiencing grief, pain, fear. In a case like this, formalization can become a kind of moral anesthetic. Conversely, staying too close to raw experience - pure phenomenology, thick description - can make it impossible to see where one is, to think systematically about tradeoffs, to compare cases, to identify patterns that help more people more effectively. Moral reflection requires moving between levels: formalizing enough to think clearly and act systematically, returning regularly to the phenomenological level to check whether the abstractions are still tracking what matters. The invariants we identify at each level need to remain answerable to the levels above and below. In clinical work especially, this movement seems essential - you need theoretical frameworks (middle level) and maybe even some formal models (thin level), but you also need to remain in contact with this person's suffering, this irreducible particular (thick level). The invariant is 'right now.'