# My Views on Consciousness

This is my crackpot, un-cited theory of consciousness.

[TOC]

*Note: I use computer science language here to describe the flow of data through the brain in an information-theoretic sense, not because I actually believe the brain behaves like a computer.*

### my theory of consciousness

consciousness = overlapping subsystems with no single bound experience at the center. linearization of the experience is only achievable on read-back from memory. no linear experience exists before that point, as the brain is constantly re-ordering events (e.g. during saccades) and writing things out of order to linear memory.

consciousness exists at the computational process layer (i.e. consciousness is at the behavior layer, not the algorithm layer or the hardware implementation layer), and is not hardware dependent (i.e. it can be simulated with a Turing machine of sufficient size and complexity, though it is more efficient with wetware than software).

#### consciousness requires 3 main subsystems

- a collection of input streams (available in parallel). this is what the brain is capable of being aware of. these input stream states are stored in the "L1 cache" of the brain, aka active ion channels, dendrite receptor states, and action potentials of neurons attached to our retina, amygdala, and various other input subsystems.
- an attention "gearbox" mechanism that can shift attention between input streams. this attention mechanism is simple, and only decides which L1 cache states to symbolize and save into linearized experiential memory (aka the brain's "RAM"). you can attend to more than one input stream at once, but more than a handful taxes the brain and leads to decreased resolution and more errors when saving to RAM. this attention mechanism alone is not sufficient to be conscious; it's only a gearbox that changes which L1 states get saved to RAM, and there is no "awareness" within the gearbox machinery.
  other subsystems of the brain can fire interrupts of different strength to request that attention be shifted to them.
- linearizable short-term memory, i.e. "RAM". this is where the brain stores recent qualia tokenized into symbols/qualia state chunks, readable in a linear or random fashion. this RAM is only able to hold one or two "sentences" worth of thought at a time (though not necessarily in words; these can be other symbols/qualia chunks). conscious experience arises only when we *read* back from RAM, as up until that point, events are not linearly ordered in time within the brain. the reading mechanism is the narrative that we experience as the smallest "conscious" unit of experience within the brain; however, it alone is not sufficient to experience anything without the prerequisite input, attention, and memory that allowed that experience to form. this is why I don't think it's reducible below these 3 algorithmic components + the RAM-reading mechanism. changing the structure or behavior of any of these 3 components / the reading mechanism would also drastically redefine the experience, to the point that it might not be considered conscious in the same way we consider humans (and some animals) conscious.

---

### my answer to the binding problem

consciousness is not bound; it describes a complex phenomenon emergent from overlapping systems, and it exists on a gradient. it cannot be condensed to any one physical point within the brain due to speed-of-light delays. removing single neurons one at a time from a conscious human brain would gradually lower its level of conscious experience on the gradient. eventually a removed neuron would break one of the fundamental 3 systems above or lower its resolution to the point of unusability, and the entity would be unconscious.
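The three-subsystem model above (parallel input streams in "L1", an attention gearbox, and a tiny linearizable "RAM" that is read back to produce the narrative) can be sketched as a toy program. This is purely illustrative; every class and method name here is my own invention, and the capacities are arbitrary stand-ins for "a handful of streams" and "one or two sentences" of RAM:

```python
from collections import deque

class ToyConsciousnessModel:
    """Toy sketch of the 3-subsystem model: parallel input streams ("L1"),
    an attention gearbox choosing which streams get symbolized, and a
    small linearizable short-term memory ("RAM") read back as narrative."""

    RAM_CAPACITY = 2   # ~"one or two sentences" worth of thought
    MAX_ATTENDED = 3   # attending to more than a handful degrades resolution

    def __init__(self, stream_names):
        # "L1 cache": latest raw state of each parallel input stream
        self.l1 = {name: None for name in stream_names}
        # "RAM": linearized experiential memory; oldest chunks are evicted
        self.ram = deque(maxlen=self.RAM_CAPACITY)
        self.attended = set()

    def sense(self, stream, state):
        """Input streams update L1 in parallel, in no global order."""
        self.l1[stream] = state

    def shift_attention(self, streams):
        """The gearbox: pick which L1 states will get saved to RAM."""
        self.attended = set(streams[:self.MAX_ATTENDED])

    def save_to_ram(self):
        """Symbolize the attended L1 states into one tokenized RAM chunk."""
        chunk = {s: self.l1[s] for s in self.attended if self.l1[s] is not None}
        if chunk:
            self.ram.append(chunk)

    def read_back(self):
        """In this model, conscious experience arises only here, on read-back."""
        return list(self.ram)

brain = ToyConsciousnessModel(["vision", "hearing", "smell"])
brain.sense("vision", "red ball")
brain.sense("hearing", "dog barking")
brain.shift_attention(["vision"])   # the gearbox ignores hearing and smell
brain.save_to_ram()
print(brain.read_back())            # only the attended stream was linearized
```

Note that the gearbox itself holds no state worth calling "awareness"; it only routes which L1 snapshots survive into RAM, matching the claim above that attention alone is not sufficient.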
### my answer to the boundary problem

consciousness exists on a gradient, but if you pick an arbitrary point on the gradient to consider "conscious", then you can draw a corresponding line somewhere within the brain dividing which overlapping subsystems and cells are necessary to satisfy that level, and exclude what exists outside that line. all the atoms within those cells exist as perturbations of underlying fields, and similarly we could draw lines within those fields demarcating what is necessary for our threshold of conscious experience, and what is not.

### my answer to frame invariance

consciousness is not frame invariant. due to speed-of-light delays and physical distance between areas of the brain, the ordering of events within a conscious experience is not even globally linearizable, let alone frame invariant. the order of events within a brain depends on the position of the observer in physical 3d space. from a practical standpoint, this means different overlapping subsystems experience different temporal orderings of each qualia that the whole system processes. the complex emergent conscious experience that arises also exhibits all the same corresponding bugs and glitches that we see with non-linearizable distributed systems in computer science (e.g. race conditions, out-of-order events, events processed more or fewer than one time, consistency errors, etc.). this is a fundamental property of all multi-component systems separated by space: events within the subsystems are not globally temporally orderable or frame invariant.

---

### other subsystems I don't think are necessary for consciousness:

- medium or long-term memory
- emotions / other neuromodulator-style systemic alterations
- input hardware or processing subsystems like eyes/vision, ears/hearing, etc.
- output hardware/subsystems like voice, movement, etc.
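The claim above that event ordering depends on observer position can be shown with a tiny toy calculation. This is my own construction (the positions, speed, and function names are invented for illustration): each "observer" sees an event only after a propagation delay proportional to its distance from the event's origin, so two observers at opposite ends can disagree about which of two nearly simultaneous events came first:

```python
# Toy illustration of observer-dependent event ordering: arrival time at an
# observer = emission time + (distance from source) / propagation speed.

def arrival_order(events, observer_pos, speed=1.0):
    """events: list of (name, emit_time, source_pos) tuples.
    Returns event names sorted by their arrival time at observer_pos."""
    def arrival(ev):
        name, emit_time, source_pos = ev
        return emit_time + abs(source_pos - observer_pos) / speed
    return [name for name, _, _ in sorted(events, key=arrival)]

# Two events emitted almost simultaneously from opposite "sides of the brain":
events = [("A", 0.0, 0.0), ("B", 0.1, 10.0)]

print(arrival_order(events, observer_pos=0.0))   # ['A', 'B'] — A is local
print(arrival_order(events, observer_pos=10.0))  # ['B', 'A'] — B is local
```

No amount of clock synchronization removes this disagreement; it falls directly out of finite propagation speed plus spatial separation, which is the sense in which the ordering above is called non-linearizable.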
- meta-awareness / self-referential ability (this can be simulated iteratively anyway and is not a special marker of anything imo)

---

### links

- [McClamrock: Marr's Three Levels](https://www.albany.edu/~ron/papers/marrlevl.html)
- [Andrés Gómez Emilsson: Digital Sentience: Can Digital Computers Ever "Wake Up"? (YouTube)](https://www.youtube.com/watch?v=IlIgmTALU74)
- [The Egg](http://www.galactanet.com/oneoff/theegg_mod.html)
- [“The Last Question” by Isaac Asimov](https://www.multivax.com/last_question.html)
- [“The dream of life” by Alan Watts](https://nicksweeting.com/d/dream.mp3)

---

## Strict Serializability of Qualia is a Myth

The fact that we can recall sight, smell, emotion, surrounding geography, and conversation topic, all bound together into a single retrievable *orderable* episode of experience - that's staggeringly impressive distributed systems coordination. Each of those elements was processed by completely different neural subsystems with different timescales and optimization goals, yet somehow they all got time-stamped and cross-referenced in a way that allows coherent reconstruction.

Memory seems like it has two possible architectures it can trend towards: either each brain subsystem has its own specialized memory encoding/retrieval mechanism (which would be incredibly complex to coordinate), or there's some kind of master synchronization system that can phase-lock the entire brain during memory formation. (Maybe seizures are what happens when that synchronization system goes haywire, forcing the whole brain into unwanted phase coherence?)

If memory formation requires some form of synchronization or coherent encoding, then maybe that's actually when consciousness "happens." Not during the initial distributed processing, but during the memory consolidation phase when all those parallel streams have to be integrated into a coherent, storable representation.
Consciousness might literally be the experience of your brain synchronizing itself for memory encoding. Or it could be the experience of your brain synchronizing itself for *recall*. I'd imagine a few different attention pools in different areas, with sort of ring buffer structures to capture incoming interrupts / requests for attention / global phase sync originating from that area.

This would explain why we don't remember most of our moment-to-moment experience - not because it wasn't processed, but because it never triggered the synchronization process that creates memorable, conscious experience. Only high-salience events or deliberate attention would activate this full-brain integration mode.

It makes me wonder about dreams, which seem to involve weird memory consolidation processes. Maybe dreams are your brain running memory integration protocols offline, trying to make sense of all the distributed processing that happened during the day. The bizarre temporal ordering and impossible scene transitions in dreams might reflect the challenge of synchronizing subsystems that were never meant to be perfectly aligned.

If consciousness emerges from the challenge of integrating distributed processing into coherent, storable representations, then maybe the convergent structures we're seeing in AI models reflect fundamental constraints on how you solve this integration problem. There might be universal principles governing how you efficiently bind distributed computations into unified representations that can be stored and retrieved.

Procedural memory (like riding a bike) might stay distributed across subsystems without requiring full integration, which is why it's so durable but hard to verbalize. Episodic memory requires the full synchronization process, which makes it more fragile but also more conscious and communicable. Also, I have a theory that cats straight up don't have (or barely have) episodic memory. It's all low-level vibes.
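The "attention pool with a ring buffer" idea above can be made concrete with a toy sketch. Everything here is my own invention for illustration: a fixed-size circular buffer where new interrupts overwrite the oldest ones when the pool is full, so only the most recent requests for attention survive long enough to be integrated:

```python
class AttentionRingBuffer:
    """Toy fixed-size ring buffer for one 'attention pool': incoming
    interrupts overwrite the oldest entries when the buffer is full, so
    only the most recent requests for attention survive to be drained."""

    def __init__(self, size):
        self.buf = [None] * size
        self.head = 0    # next write position
        self.count = 0   # number of valid (not-yet-overwritten) entries

    def push(self, interrupt):
        """Record an interrupt, silently evicting the oldest if full."""
        self.buf[self.head] = interrupt
        self.head = (self.head + 1) % len(self.buf)
        self.count = min(self.count + 1, len(self.buf))

    def drain(self):
        """Read out the surviving interrupts, oldest first, and clear."""
        start = (self.head - self.count) % len(self.buf)
        items = [self.buf[(start + i) % len(self.buf)]
                 for i in range(self.count)]
        self.count = 0
        return items

pool = AttentionRingBuffer(size=3)
for interrupt in ["itch", "noise", "hunger", "flash", "pain"]:
    pool.push(interrupt)
print(pool.drain())   # only the 3 most recent interrupts survived
```

The silent-eviction behavior is the interesting part for the argument above: low-salience interrupts that arrive before the pool is drained simply vanish without ever triggering integration, which would look, from the inside, like "never having been conscious of" them.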
The coherence might also be largely illusory, constructed at recall time rather than encoded during the original experience. When you access that "coherent" decades-old memory in four different contexts, you might actually be getting four different reconstructions, each optimized for the current retrieval context rather than faithful to some original unified encoding.

This completely flips the memory-as-equalizer idea on its head. Instead of memory proving that consciousness creates unified experience, it might prove the opposite - that consciousness is fundamentally a reconstruction process that creates the illusion of coherence by weaving together whatever distributed fragments happen to be accessible at recall time.

So maybe there's no master synchronization system at all. Each subsystem just dumps its optimized, filtered outputs into whatever storage mechanisms it has access to, with loose temporal associations but no true binding. When you "remember" an episode, your brain is essentially doing the same kind of narrative construction process in real-time: grabbing visual fragments from one storage system, emotional traces from another, linguistic elements from a third, and weaving them into a story that feels coherent and unified. I guess that's why memory recall can feel somewhat "laggy", like different elements of an episode coming back to you slowly as you think about it harder.

This would explain why consciousness feels so seamless and unified despite being built from massively parallel, asynchronous processing. The reconstruction process is incredibly sophisticated at creating coherent narratives from fragmented async inputs with lag and jitter, whether those inputs are coming from current sensory processing or from memory storage. The feeling of unified experience is the subjective signature of that reconstruction algorithm running successfully.

It makes me wonder about confabulation, false memories, and eyewitness unreliability in a whole new light.
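The no-master-synchronizer picture above can be sketched as a toy recall function. This is my own construction (store names, timestamps, and the window parameter are all invented): each subsystem keeps its own fragments with loose timestamps, and "remembering" just gathers whatever fragments fall near the probed moment and sorts them into a narrative. The coherence is produced here, at recall time, not stored anywhere:

```python
# Toy sketch of recall-as-reconstruction: per-subsystem fragment stores with
# loose timestamps, bound into a narrative only when a recall probe arrives.

def reconstruct(stores, probe_time, window=1.0):
    """stores: {subsystem: [(timestamp, fragment), ...]}.
    Gathers every fragment within `window` of probe_time across all
    subsystem stores and orders them by timestamp into one narrative."""
    hits = []
    for subsystem, fragments in stores.items():
        for t, frag in fragments:
            if abs(t - probe_time) <= window:
                hits.append((t, subsystem, frag))
    return [f"{subsystem}: {frag}" for _, subsystem, frag in sorted(hits)]

stores = {
    "visual":    [(0.2, "sunset over water"), (5.0, "dark room")],
    "emotional": [(0.5, "calm")],
    "auditory":  [(0.9, "waves crashing")],
}
print(reconstruct(stores, probe_time=0.5))
# ['visual: sunset over water', 'emotional: calm', 'auditory: waves crashing']
```

Probing the same stores at a different `probe_time` stitches together an entirely different "episode" from the same loose fragments, which is the sense in which each act of recall can yield a different reconstruction.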
These aren't bugs in the memory system - they're features of a reconstruction process that prioritizes narrative coherence over literal accuracy. Your brain would rather give you a plausible, coherent story than admit that the fragments don't actually fit together properly.