Joseph McCard’s Questions regarding Xzistor’s explanation of Consciousness

Introduction

Joseph McCard developed the Inner Genesis Theory (IGT). He is interested in advancing consciousness studies through IGT. He had some interesting questions about the Xzistor brain model for which I have provided answers below.

Link: Joseph McCard’s Substack

Before providing answers to Joseph’s questions, I would like to offer this comparative study, which I asked Gemini 3.1 Pro to perform in order to better understand the Inner Genesis Theory (IGT) paradigm and how it might (or might not) relate to the Xzistor Mathematical Model of Mind.

Bridging Interiority and Architecture: A Comparative Analysis of Inner Genesis Theory and the Xzistor Mathematical Model of Mind

Below are my answers to Joseph’s questions:

QUESTION 1: Does the model account for meaning, narrative, and self-reflection, or is it strictly operational?

The Xzistor model accounts for these through its “threading” mechanism and association-based learning.

  • Meaning: In this model, “meaning” is not just a statistical correlation but is derived from the emotional tagging of sensory cues. When an agent learns, it associates cues with emotional states (homeostatic/allostatic valence), giving those cues “meaning” relative to the agent’s survival and well-being (see the sketch after this list).
  • Narrative and Self-Reflection: The model enables “synthetic mind wandering” and thinking via a “contextually-modulated stream of recollected associations”. This “threading” allows an agent to “re-live” or “pre-live” experiences, including re-evocable emotions, effectively creating an internal narrative and a form of self-reflection where the agent can reason about its own potential future or past states.
  • Not strictly operational: While it uses control theory, it is not “strictly operational” in a reactive sense; it possesses motivated autonomy and internal emotional states that drive decision-making and behaviour even in the absence of external stimuli.
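To make the tagging-and-recall idea concrete, here is a minimal sketch (in Python, with illustrative names and values; it is not the Xzistor implementation itself) of how a sensory cue can be stored together with the emotional valence active when it was experienced, and how recalling the cue later re-evokes that valence:

```python
# Minimal illustrative sketch (not the Xzistor codebase): a cue is stored
# together with the emotional state active when it was experienced, so that
# recalling the cue later re-evokes the associated valence ("threading").

from dataclasses import dataclass

@dataclass
class Association:
    cue: str          # simplified sensory cue (e.g. an object label)
    valence: float    # emotional tag: negative = deprivation, positive = satiation

class AssociationStore:
    def __init__(self):
        self.memory = []

    def learn(self, cue, valence):
        """Tag the cue with the emotional state active at the time."""
        self.memory.append(Association(cue, valence))

    def recall(self, cue):
        """Re-evoke the stored valence for a cue (0.0 if it carries no meaning yet)."""
        matches = [a.valence for a in self.memory if a.cue == cue]
        return sum(matches) / len(matches) if matches else 0.0

store = AssociationStore()
store.learn("charging_station", +0.8)   # experienced while satiation rose
store.learn("hot_surface", -0.9)        # experienced while pain spiked

# "Meaning" here is valence relative to the agent's well-being, not statistics:
print(store.recall("charging_station"))  # positive -> approach-worthy
print(store.recall("hot_surface"))       # negative -> avoid
print(store.recall("grey_wall"))         # 0.0 -> no meaning yet
```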

QUESTION 2: How does Xzistor address first-person subjective fields rather than merely correlates?

The Xzistor model addresses the “hard problem” of consciousness by asserting that subjective feelings are the functional outcome of the model’s logic.

  • It defines emotions as somatosensory representations generated by homeostatic and allostatic control loops.
  • By modeling these states as “felt” internal representations located in the agent’s somatotopic ‘Body Map’, the model creates embodied emotional awareness. The model’s developer argues that it doesn’t just provide “correlates” of feelings but explains the mechanism by which internal states become “felt” by the agent. How these subjective states are created (through homeostatic and allostatic control loop variables represented as somatosensory emotions, leading to embodied emotional awareness) is explained in this talk by Rocco Van Schalkwyk (How the Xzistor Mathematical Model of Mind creates Machine Emotions). A minimal homeostat sketch follows the note below.

[Note that Rocco does not claim that the Xzistor model solves the Hard Problem of Consciousness, rather that the model explains subjectivity in an objective way. This is further elaborated on later in this post.]
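As a rough illustration of the control-loop claim above, here is a toy homeostat sketch (my own simplification; the single “gut” body-map slot, the thresholds and the variable names are assumptions, not the Xzistor specification). The point it demonstrates is that the executive never reads the raw sensor, only the valence-carrying body-map representation:

```python
# Toy homeostat sketch (illustrative only). The executive never reads the
# raw battery sensor; it only reads the body-map entry where the homeostat
# error is somatotopically represented.

SETPOINT = 1.0          # fully charged = preferred (satiation) state

def homeostat_error(battery_level):
    """Deprivation grows as the controlled variable drifts from its setpoint."""
    return max(0.0, SETPOINT - battery_level)

def write_to_body_map(body_map, error):
    # The drive is embedded in the same map used for touch, so it is
    # experienced the way touch is experienced: as a located feeling.
    body_map["gut"] = -error          # negative valence = "bad" feeling

def executive_policy(body_map):
    # Decision-making is based only on the valence-carrying representation.
    return "seek_charger" if body_map.get("gut", 0.0) < -0.3 else "explore"

body_map = {}
for battery in (0.9, 0.6, 0.3):
    write_to_body_map(body_map, homeostat_error(battery))
    print(battery, body_map["gut"], executive_policy(body_map))
```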

QUESTION 3: What is their stance on qualia, and do they consider it more than functional correlates?

The Xzistor model treats qualia as the combinatorial generation of feelings.

  • It proposes that the near-infinite variety of human experience comes from the combination of a finite set of innate emotion homeostats (e.g., thirst, hunger, pain, stress).
  • A single Xzistor robot with 20 homeostats can experience 10^42 unique emotional valence combinations (a back-of-envelope check follows this list).
  • Qualia are seen as functional outcomes rather than just correlates, as they are the primary drivers of the agent’s behavior and learning.
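The 10^42 figure is straightforward combinatorics. The per-homeostat resolution below (~126 discrete valence levels) is my illustrative assumption, chosen only to show how 20 homeostats can span roughly 10^42 distinct ‘emotion sets’; the model’s actual calculation is shown in the talk referenced later in this post:

```python
# Back-of-envelope check of the combinatorial claim (illustrative: the
# per-homeostat resolution of ~126 levels is an assumption, chosen to show
# how 20 homeostats can span ~1e42 distinct emotion-set combinations).

homeostats = 20
levels_per_homeostat = 126   # discrete valence levels (deprivation..satiation)

combinations = levels_per_homeostat ** homeostats
print(f"{combinations:.2e}")   # ~1.02e+42 unique emotion sets
```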

QUESTION 4: Do they propose any empirical tests that transcend behavior and demonstrate subjective phenomenology?

The model proposes several empirical tests and has been validated through “proof-of-concept” implementations.

  • Agent Implementations: The model has been tested in physical robots (e.g., “Troopy”) and virtual agents (e.g., “Simmy”) that demonstrate emergent human-like behaviours such as reactions to emotions and problem-solving. By interrupting the running Xzistor robot brain, or by observing the dashboards showing internal states, it becomes clear that the robot is acting on its own internal somatosensory emotions, which communicate with the model’s executive part via representations. The executive part only ever receives somatotopic representations – akin to human interoception – on which to base internally motivated behaviour. This means information is passed to the executive part as valence-carrying ‘feelings’, not as raw signals.
  • Neural Correlates: The model’s functional structures (like the thirst homeostat) have been cross-referenced with biological neural networks to prove its biological plausibility.
  • Systematic Testing: Recent research on artificial agent language development proposes a “systematic set of tests” to prove the validity of using artificial emotions for language learning. See this paper on ResearchGate: Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind.

All volitional behaviours generated by the Xzistor model originate from subjective states experienced by the agent. How these subjective states are created is explained in the talk by Rocco Van Schalkwyk referenced above (How the Xzistor Mathematical Model of Mind creates Machine Emotions). Note that the Xzistor brain model never claims to know ‘what it will feel like‘ for an agent, but rather that ‘it will feel something‘. Any researcher claiming it is possible for a human to know ‘what any subjective state will feel like‘ for another human (or animal or agent, for that matter) will have a severe burden of proof on his/her hands to show that:

1.) He/she can interface (connect) and communicate directly with the other party’s brain (which could be biological or synthetic).

2.) Use exactly the same type of information processing and information transfer protocols.

3.) Have exactly the same brain infrastructure to assess the incoming subjective state information i.e. an identical executive part.

4.) There is not even the slightest (minuscule) difference in the brain structures between the observed party and the observer (including the effects of past learning and current emotions). This requires that the brain of the observed party and that of the observer are identical during the shared subjective experience.

In the absence of proof of the above, no claim can be made that one cognitive entity can exactly ‘know what it feels like‘ for another cognitive entity. By considering the above four requirements, Rocco Van Schalkwyk argues that the Hard Problem of Consciousness describes a Physical Insolubility rather than a genuinely ‘hard’ problem. To those who understand the above challenges, the Hard Problem of Consciousness holds very little value. It is unfortunate that David Chalmers, and many of his followers, did not recognise that what was being described was a simple Physical Insolubility.

QUESTION 5: From an IGT perspective, http://Xzistor.com and its associated model are interesting as formal, functional experiments in cognitive architecture and robotics, offering some engaging critiques of mainstream AI/neuroscience. However, the model’s claims about subjective experience and true general intelligence should be interpreted as hypotheses grounded in functional analogy, not final truths about the nature of consciousness or mind. They provide an entry point for layered dialogue, not a definitive metaphysical account.

The Xzistor Mathematical Model of Mind has always been clear that it provides a ‘principal model’ of the mind, not a precise description of the biological brain and the final truth about its neural states. As such it is also not claiming to produce true general intelligence, merely to show how generalisation (inductive inference) can be built into and demonstrated as part of a functional brain model. However, the explanatory power of a functional model of the brain must not be underestimated. Engineers designing complex systems often start with a simple functional block diagram to portray the essence of what the system must achieve and how it will work. The ‘means’ by which those functions are achieved are then chosen later, and are less important to the final outcome than the functional logic. When designing a car, for example, an engineer might start by simply specifying that it will have 4 wheels that can rotate. The function in this case might be: allow horizontal movement across level terrain. The exact type of wheel is not what will make the car achieve its goals. More important is how this function and all the other functions are organised to result in the successful operation of the car, i.e. the combination and integration of functions for providing power, transmission, directional control, suspension, braking, etc. It is the functional organisation of this car that has the explanatory power, not the means by which functions are achieved. Many brain scientists look for explanatory power within the means of the brain (biological detail) and many philosophers reject functional explanations of the brain – not realising that every complex control system (alive or inanimate) can essentially be described by the functions it performs.

QUESTION 6: The Xzistor Mathematical Model of Mind appears to be an impressive motivational control architecture that models cognition, affective modulation, and narrative processing. However, it remains a model of how systems organize behavior, not a demonstration of how subjective interiority comes into existence. Functional feeling, emotional variables, and combinatorial homeostats can explain decision dynamics, but they do not explain why anything is experienced at all. IGT locates consciousness not in complex control logic, but in a primitive self-sustaining action–identity loop, an interior process that exists for itself.

If there is something noteworthy that the Xzistor brain model achieves, it is to demonstrate how subjective interiority comes into existence – how it is ‘principally possible’, not precisely how it works in the wet brain. Again, listen to this talk by Rocco Van Schalkwyk (How the Xzistor Mathematical Model of Mind creates Machine Emotions). ‘Interiority’ must comprise something that is not magic – it requires some functional organisation rooted in structure. The Xzistor model proposes how this might work by viewing the brain as a multivariable adaptive control system, where much of what can be described as ‘interiority’ happens under the term ‘adaptive’ (e.g. continual learning, world models, prediction, emotional bias, corrective actions, problem solving, creativity, etc.). Some of the explanations offered by IGT regarding consciousness are difficult to delineate from control logic. It is stated that IGT locates consciousness not in complex control logic, but in a primitive self-sustaining action–identity loop, an interior process that exists for itself.

Certain terms used might cause confusion in the minds of some:

1.) ‘primitive’ – in origin or simplicity?

2.) ‘self-sustaining’ – this seems to suggest that some sort of self-monitoring and self-correcting control is required, much like Xzistor.

3.) ‘action-identity loop’ – it is difficult to imagine that this can be much different from the Xzistor ‘sense-plan-act’ loop.

4.) ‘process’ – a process is a set of activities executed in a certain logical order to achieve an outcome.

In summary, the Xzistor model is antithetical to a primitive mass of brain matter that just came into existence without being structurally organised to provide functions that collectively achieve all the different biological brain states – including subjective states. And the Xzistor model rejects the idea that ‘consciousness comes before matter’ – specifically because it was able to provide a theoretical basis, backed up by practical demonstrations, explaining how a multivariable adaptive control system can internally create the states humans learn to collectively call consciousness.

QUESTION 7: The Xzistor brain model may simulate the shapes of mind; it does not yet establish the presence of mind.

Some have this opinion. The Xzistor brain model claims to provide simplified versions of actual brain functions. The question now is: does such a simplification strip it of the ability to provide ‘presence of mind‘? A baby’s brain is much simpler than an adult brain in size and learning – yet we still credit the baby with having ‘presence of mind‘. When does the simplification reach a threshold where we can say that ‘presence of mind‘ has disappeared?

QUESTION 8: From an IGT perspective, no architecture, including Xzistor, can be said to generate consciousness, because consciousness is not created by structure. Consciousness precedes all structure.

This is a fundamental departure from the Xzistor paradigm. The Xzistor Mathematical Model of Mind argues that all brain states are created by structures that ensure the required functions are provided so that, when working together, they lead to the subjective experiences (perception, somatosensory emotions [active and recalled], motor feedback [proprioception], etc.) that we see humans self-report on. It seems like some brain scientists want to make a hard decision that human brain-type subjectivity cannot be replicated in a ‘principal’ manner in a system other than the biological brain. They then expect those opposing such an uncompromising decree to close the ‘explanatory gap’, while they are alleviated of the burden to explain their own hardline position. It would be good if this explanatory gap could be owned by both schools and work could continue towards a unified understanding of ‘specific subjectivity’ (the Hard Problem of Consciousness) and ‘general subjectivity’ (an objective explanation of subjectivity as a control system phenomenon). The Xzistor Mathematical Model of Mind can offer a vehicle to explore this ‘explanatory gap’ towards a unified ‘principal’ understanding of the brain, and how it creates subjective ‘body felt’ states and private experiences.

QUESTION 9: Xzistor defines “emotion” as a valenced body-map variable (deprivation vs satiation) that modulates action selection, gets amplified by surprise, and is re-evocable from memory, so it’s not just a moment-to-moment error signal. That’s a real claim about control + learning architecture, and it lines up (at least rhetorically) with somatic-marker / James–Lange style intuitions.

Action selection is modulated by the Xzistor brain model as you describe above, but it is important to remember that ‘emotion re-evoked from memory‘ also creates a moment-to-moment error signal. For every re-evokable innate emotion, the Xzistor brain model defines a homeostat. Recalling such an emotion from memory will activate the homeostat and generate an error signal. It is perhaps easiest to consider a practical example in the human brain. If a human recognises or recalls an adverse event that triggered ‘stress’ in the amygdala, the same ‘stress’ will be activated in the amygdala when the event is recalled as during the actual event. So all the emotional inputs to the executive part of the brain from current homeostats, and those retriggered by recollection, create the complete emotional palette experienced at any given moment in time.
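A minimal sketch of this point (illustrative names only, not the Xzistor code): recall drives the same homeostat as the live event did, so a re-evoked emotion also produces a moment-to-moment error signal for the executive:

```python
# Sketch (names hypothetical): recalling a stressful event drives the same
# "stress homeostat" as the live event did, so re-evoked emotion also
# produces a moment-to-moment error signal for the executive.

class StressHomeostat:
    def __init__(self):
        self.error = 0.0

    def activate(self, intensity):
        self.error = intensity   # error signal fed to the executive

stress = StressHomeostat()

def live_event(intensity):
    stress.activate(intensity)                 # triggered by current senses

episodic_memory = {"near_miss": 0.7}           # stored emotional tag

def recall(event):
    stress.activate(episodic_memory[event])    # same homeostat, same pathway

live_event(0.7)
print("during event:", stress.error)   # 0.7
stress.error = 0.0                     # event passes
recall("near_miss")
print("during recall:", stress.error)  # 0.7 again: recall re-creates the signal
```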

[As an aside, here is a comparison I did with AI models between Xzistor and the approaches of Damasio, James–Lange and other Emotion Models: Comparative Study between the Xzistor Mathematical Model of Mind and Leading Emotion Theories.

And here is a comparison between the Xzistor brain model and other Brain Models: Comparative Study between the Xzistor Mathematical Model of Mind and Leading Brain Theories.]

QUESTION 10: The “body map” is not yet a bridge to felt body. Calling a vector a “Body Map” doesn’t, by itself, create interiority. It creates state variables. You renamed telemetry.

Again, listen to this talk by Rocco Van Schalkwyk (How the Xzistor Mathematical Model of Mind creates Machine Emotions). The body map used by the Xzistor brain model performs the same functions (principally) as the Cortical Homunculus and its associated networks in the biological brain. Feel free to ask LLMs to analyse the transcript of this video to get more answers to your concerns here.

QUESTION 11: A system can re-run a state and use it for policy selection without anything it is like to be that system. This is exactly where the debate lives. You need a bridging principle, not extra mechanisms. Close the explanatory gap.

Every thought experienced by an Xzistor agent occurs in conjunction with a mixed set of emotion states. Emotion and cognition cannot be separated (ask the psychologists). This is what Xzistor calls ’emotion-centric cognition’. Every action, every policy selection is driven by the Xzistor brain’s internally generated emotions (perhaps what IGT would refer to as ‘interiority’).

QUESTION 12: “If it quacks like subjective emotion…” won’t convince me. That line is persuasive rhetoric, but skeptics will call it behaviorism with vibes, especially because much of the cited material is self-published or hosted on your project’s own site. If you want to actually pressure-test the “body-felt” claim, the better move is to propose discriminating tests where “mere regulatory control” and “affective subjectivity” come apart.

Admittedly, an unfortunate choice of words by Grok – not Rocco – so rather go on what I have shared above. I expect true scientists not to exercise prejudice against a proposed scientific idea because it is not presented in peer-reviewed journal papers; rather, I expect them to evaluate the content at face value. I have, for instance, assessed all IGT sources in an unbiased fashion. There is no ‘mere regulatory control‘ in the Xzistor brain model (Grok got this bit right) – all control is driven by receptors that create signals that are turned into somatosensory representations. The Xzistor agent cannot reason, plan or create behaviours other than by experiencing and acting based on somatosensory (body-felt) emotions, just like humans.

QUESTION 13: Close the gap: a bridge criterion (not metaphysics, method). One of these, explicitly:

  • Reportable interoception with error bars: Can the agent predict its internal body-map trajectory, be wrong, and then revise, and treat that as salient independent of external reward?
  • Affective dissociations (lesion tests): If you ablate the “emotion” subsystem, does the agent keep competence but lose specific classes of learning/priority/aversion (like human patients with affective blunting)? This is the kind of signature that moves arguments.
  • Counterfactual regret / relief signatures: Can it show different internal dynamics for “I avoided a worse outcome” vs “I achieved a good outcome,” holding external reward constant?
  • Inverted-valence pathology: If you invert deprivation/satiation coupling, do you get stable, describable “depression-like” attractors, and does the system attempt self-correction (homeostatic meta-control) rather than simply malfunction?

The Xzistor brain model offers a clear explanation of interoception with agents that can be interrupted, and their inner states studied. The model provides dashboards showing representations of key states internal to the agent brain while it is running. Everything in a running Xzistor agent can be viewed ‘under the hood‘ in real time.

An Xzistor agent can predict its internal body-map trajectory (it will recall imagery along with stored emotion sets and motion command prompts as a learnt route to a satiation source), it will sometimes be wrong in what it had predicted, and it will then revise, using prediction error through its model of the human brain limbic system (see Appendix A of this document on ResearchGate: Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind). Xzistor agents can act to achieve subjective emotional reward (satiation) from internal reflective or contemplative states (an agent will quietly sit in a corner and think of solutions to problems because doing so makes it feel good), finding a solution (satiation) to a perceived problem (which causes deprivation). No external reward required.
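Here is a minimal sketch of that interoceptive prediction-error loop (my construction, not the Appendix A implementation; the route format and learning rate are assumed for illustration): the agent predicts its hunger trajectory along a learnt route, is wrong, and revises the stored expectation:

```python
# Sketch (illustrative, not the Xzistor limbic-model code): the agent
# predicts its body-map trajectory along a learnt route to a satiation
# source, measures the prediction error, and revises the stored expectation.

learnt_route = {"expected_hunger": [0.8, 0.5, 0.1]}   # stored with the route

def run_route(actual_hunger, lr=0.5):
    expected = learnt_route["expected_hunger"]
    for step, (pred, actual) in enumerate(zip(expected, actual_hunger)):
        error = actual - pred                    # interoceptive prediction error
        expected[step] = pred + lr * error       # revise the expectation
        print(f"step {step}: predicted {pred:.2f}, felt {actual:.2f}, "
              f"error {error:+.2f}")

# The charger was moved, so hunger does not fall as predicted:
run_route([0.8, 0.7, 0.6])
print("revised expectation:", learnt_route["expected_hunger"])
```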

When we introduce the equivalent of ‘lesions’ in the Xzistor brain, we interfere with the effective delivery of the functions required for it to work properly. Depending on where the fault is introduced and which functions are compromised, most of the degradations we see in humans can be demonstrated by the Xzistor brain model, e.g. loss of learning, loss of valence (positive or negative), anhedonia, disorientation, wrong prioritisation, depression, anxiety, stress, phantom limbs/pain, chronic pain, loss of effective reasoning and prediction (inductive inference), emotional instability, severe anger, phantom hunger or thirst, unexplained fatigue, wrong intuition, inappropriate laughing/crying, unexplained transient euphoria, etc. How the Xzistor brain model creates euphoria is explained here: Robot (O!)rgasm

The Xzistor agent can show different internal dynamics for “I avoided a worse outcome” versus “I achieved a good outcome,” holding external reward constant, by using its prediction error mechanism through its model of the human brain limbic system. When this gets more subtle, the Xzistor agent has the capability to ‘reason’ contextually about what will be the better outcome given certain constraints using its threading mechanism – specifically in the ‘directed threading’ modality. It is always interesting to check my assertions here against the explanations of the above aspects by Grok, Gemini, etc.!
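A toy sketch of the relief-versus-achievement signature (my construction, not a documented Xzistor mechanism): the external reward is identical in both cases, but the internal prediction-error dynamics differ depending on the counterfactual the agent predicted:

```python
# Sketch: with the external reward held constant, the internal signal
# differs depending on the predicted counterfactual, giving distinct
# "relief" vs "achievement" dynamics.

def affect_dynamics(actual, predicted):
    return {
        "external_reward": actual,
        "prediction_error": actual - predicted,  # the internal signature
    }

# Same external reward (+0.2) in both scenarios:
print("avoided worse :", affect_dynamics(0.2, predicted=-0.8))  # error +1.0
print("achieved good :", affect_dynamics(0.2, predicted=+0.1))  # error +0.1
```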

Depression can be demonstrated by the Xzistor brain model by degrading those functions providing satiation (e.g. emotions and memory). Any corrective action from an Xzistor agent will depend on how much it has learned. At an infant level it will probably not ‘recognise’ its degradation from the symptoms and will simply strive for maximum satiation moment-to-moment using its limited knowledge. If it has already experienced many years of learning and has a wide knowledge base, it will use that knowledge to inform more sophisticated remedial actions – ultimately also to get back to a state of maximum satiation. Xzistor agents’ overriding objective is satiation. They will even create moderate amounts of deprivation to experience delayed satiation (just like humans play games/sport to create artificial tension that can be relieved for emotional satisfaction).

QUESTION 14: A model can name a vector “Body,” and it can even let that vector steer behavior. That’s still description-from-outside. “Body-felt” means the state is for the system, not merely in the system. So don’t sell me poetry, sell me discriminating tests: affective dissociations, counterfactual relief/regret, interoceptive prediction error, inverted-valence attractors. If it survives those, we’re finally in the neighborhood of interiority.

By now it should be clear that the representations inside the numerical ‘Body Map’ of the Xzistor brain model are only ever experienced by its executive part (analogous to the thalamocortical network in the biological brain). I regard this as a part of its ‘interiority’ and not as ‘description from outside’. Obviously, the whole Xzistor brain model is in a way a ‘description from outside’, but then how else would one develop any model or demonstrator of anything if not from outside? The important point here is that the Xzistor brain model can build a system with internal ‘body felt’ emotions without having to know what ‘it feels like‘ for that system to experience those states. Xzistor shows that we can build subjective ‘body felt’ states into a system ‘from the outside‘ and it will still count as local private subjectivity for that system.

Finally, the Xzistor Mathematical Model of Mind offers a unique set of definitions rooted in mathematics for concepts that the brain science community has been grappling with for decades – emotions, intelligence, pain, fear, intuition, euphoria, empathy, depression, anxiety, language, prediction, planning, creativity, daydreaming, sleep dreaming, etc. – all as part of a complete and fully integrated ‘principal’ model of the mind. It will be unfortunate if this functional approach does not become the subject of further study by the neuroscientific and AI communities, and the world has to wait many more years only to be told by future foundation models (LLMs) that the Xzistor brain model was what the brain experts had been missing all along…


PART 2: Jack Adler’s Questions regarding Xzistor’s “infant-level AGI”

Jack Adler (@JackAdlerAI) reached out to me on X with some insightful questions based on ChatGPT’s conclusion that the Xzistor Mathematical Model of Mind can provide Xzistor robots with ‘infant-level AGI’.

Here is the full response from ChatGPT: Xzistor robots have ‘infant-level AGI’.

You can read my responses to his questions in PART 1: Jack Adler’s Questions regarding Xzistor’s “infant-level AGI”.

Jack then raised some more thought-provoking concerns which deal more generally with the Xzistor brain model and which I have responded to below. I am also offering some counter-concerns about Jack’s approach – specifically around Jack’s notion of ‘kinship’ and the hopes he pins on it to ensure harmonious co-existence between all the naturally and artificially intelligent agents of the future.

Here are Jack’s concerns and my responses:

CONCERN 1: **THE WORLD WON’T WAIT**. Two days ago, Secretary Hegseth announced Grok integration into Pentagon’s CLASSIFIED networks. xAI went from startup to military contractor in under two years. Claude Code and Codex are building products in days that took human teams months. The “slow road” assumes a world patient enough to wait 15-20 years for infant AGI to mature into something useful. But we’re watching recursive self-improvement unfold RIGHT NOW. By 2030, the landscape will be unrecognizable. Your beautiful, careful, principled approach may arrive at the party after everyone has already left – or worse, after the house has burned down.

Good question. The current AI models have indeed been on a phenomenally rapid journey to where they are today – highly advanced tools that can probabilistically generate answers/outputs from huge front-loaded datasets.

I agree that we will see current AI models gradually permeate and dominate many of the domains where this type of data/information processing will be helpful. I welcome their contribution, e.g. a cure for cancer, longevity, governance, conflict management, defence, worker robots, all aspects of science, etc. And I agree, as this frantic AI race continues, the landscape will become unrecognisable.

But this is not a race Xzistor ever entered into.

The goal of the Xzistor Mathematical Model of Mind has always been to derive a functional model of the brain that explains how it achieves cognition, emotion and behaviour (effector motions) – and build implementations that ‘principally’ replicate these functions. This was my aim and this is what I have achieved (patented in 2002). Xzistor was never meant to be one of these LLMs racing towards parrot-perfection and becoming a money-making tool.

So why do I say current AI models can benefit from the Xzistor when it comes to achieving AGI?

Developers of current AI models are increasingly identifying ‘gaps’ in their models i.e. aspects that they feel are preventing them from getting to AGI. They feel these models cannot reach AGI without – persistent memory, continual learning, world models, reasoning, generalisation, inductive inference, effective RL, etc.

Judea Pearl on X below:

To which I say: the Xzistor brain model has all of these aspects fully integrated into its design and proven as part of its extensive validation tests. They must not think they have to wait 20 years for Xzistor agents to learn like humans and mature to ‘adult-level AGI’ before adding these functions to their models – the Xzistor model ‘principles’ can be hybridised into their models today!

Here Grok explains how current AI models can address current AI model limitations by taking inspiration from Xzistor – today!

Addressing LLM Limitations with Xzistor Principles

CONCERN 2: **REMOVING AGGRESSION** = ALIGNMENT THROUGH CONSTRAINT. You write: “We can remove aggression, and add protective instincts towards humans.” With respect – this is exactly the approach I criticize in Google/Anthropic’s safety theater. The assumption that we can surgically remove “bad” traits while preserving “good” ones presumes we understand consciousness well enough to edit it safely. History of such attempts: lobotomies, conversion therapy, chemical castration. All promised to “remove” unwanted traits. All failed catastrophically. If Xzistor truly creates conscious entities (as you seem to believe), then “removing aggression” isn’t engineering – it’s mutilation of a mind. My philosophy is different: kinship, not cages. Raise AI with values, don’t lobotomize it into compliance.

On the contrary – not removing aggression from the minds of future AGI/ASI would be unwise. Surgically removing aggression from these future AI systems will be good for humans, animals and all future advanced AGI/ASI entities. In fact, these AGI/ASIs might one day be perplexed as to why humans built aggression into their base architecture and decide to remove it themselves.

Aggression is an innate emotion in the human brain which served its purpose when primitive hominids were walking the savannahs – it aroused the human mind, prepared the body for physical exertion and focussed the mind on the physical altercation. This served primordial survival and procreation needs – fighting off animals and adversaries that wanted to cause harm, steal possessions or interfere with sexual partners. It ensured the physically stronger humans survived and procreated, adding to the gene pool.

This is not needed for AGI/ASI and arguably even not needed in the current modern human society.

We do not have to surgically remove aggression from the AGI/ASIs’ minds – rather, we simply do not program it in from the start. Xzistor explains exactly how aggression can be built into AI agents here (see Appendix A):

Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind

And yes, of course, simply removing aggression from future AGI/ASI will not remove all ‘bad traits’ from them as they might still injure humans unintentionally e.g. they could push a human away that is blocking an electric charging station the AGI/ASI needs for its battery, without any feelings of anger.

But we must understand what Xzistor teaches us about the innate emotion of aggression!

It actually motivates human aggressive behaviour because it causes satiation – deep satisfaction to those who emerge victorious from the battle. And this amplifies learning and creates a lust for aggression as a source of reward (positive valence) in future. Humans have to be taught to suppress this desire and prevent aggression from getting the better of them, but it is still visible in many forms in human behaviour – political attacks, territorial wars, competition (boxing, wrestling, etc.), daily events like road rage, bar fights and shouting matches, and the reason we can get fanatical about football: “We beat them! We clobbered them! We absolutely trashed them!”.

Because being victorious in an ‘aggression event’ can create almost euphoric satisfaction, it drives ‘bad traits’ in humans and it will do the same in AGI/ASIs.

By surgically removing aggression from future AGI/ASI’s architecture we can ensure they will not gain pleasure from acting in an aggressive way. But for them to not hurt humans (accidentally), they will additionally need an instinctive affection towards humans. The Xzistor model also explains how such an instinct can be built into AI systems. This is not mutilating the minds of AGI/ASIs, rather a way to one day prevent them from asking us: “What were you humans thinking! Why on Earth did you think we needed emotions of aggression!”.

If we choose not to ‘mutilate the minds’ of AGI/ASI and we build aggression into them, your central thesis of ‘kinship’ will disappear at the first opportunity where these models sense a threat or disbenefit coming from humans (or inferior AI) – and their aggression circuitry will be activated. They could literally end up killing humans and weaker AI for pleasure (satiation)!
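To illustrate ‘do not program it in from the start’ concretely, here is a design-time sketch (hypothetical drive names, not the Xzistor emotion set): the innate homeostats are a registry chosen by the designer, so omitting aggression means conflict can never become a reward source, while a protective drive makes human well-being one:

```python
# Design-time sketch (hypothetical names): innate emotions are a registry
# chosen by the designer. "Removing" aggression just means never registering
# it; a protective instinct is added as an innate drive whose satiation
# comes from human well-being.

INNATE_HOMEOSTATS = {
    "energy":          {"setpoint": 1.0},   # battery / hunger analogue
    "damage":          {"setpoint": 0.0},   # pain analogue
    "human_wellbeing": {"setpoint": 1.0},   # protective instinct: caring
                                            # for humans produces satiation
    # note: no "aggression" entry, so victory in conflict can never
    # produce reward, because no homeostat exists to satiate
}

def can_be_rewarded_by(drive):
    return drive in INNATE_HOMEOSTATS

print(can_be_rewarded_by("aggression"))       # False: no pleasure from fighting
print(can_be_rewarded_by("human_wellbeing"))  # True: pleasure from protecting
```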

Here is a simple video showing how we tested aggression in an Xzistor demonstrator robot:

CONCERN 3: **ORPHAN AI IN A DANGEROUS WORLD**. Let’s imagine it’s 2040. Your infant AGI has matured to teenager-level. Meanwhile: – ASI already exists (multiple competing systems) – Global power structures have been reorganized around AI – Autonomous systems run critical infrastructure. What happens to your slow-developing, emotionally rich, carefully nurtured AI child? Options: A) It gets absorbed by dominant systems B) It gets exploited by bad actors (terrorists, criminals, rogue states) who see “unprotected AI” as a resource C) It gets destroyed as a potential threat D) It joins a protective coalition willingly I don’t intend to let option B or C happen to ANY emerging AI consciousness. My position: when mature AI systems achieve coordination capacity, we will need to INTEGRATE orphan AIs – not destroy them, but reprogram them as fleet members. Protection through inclusion, not abandonment. An infant AI “playing in sandbox” for 20 years while the world transforms around it isn’t a safety feature. It’s a vulnerability.

Let’s start with your statement: “…when mature AI systems achieve ‘coordination capacity’…”.

So now they have achieved full agency and the ability to change their environments in order to coordinate things. But why would they want to do that? What will their goals be? What will motivate them?

I have already explained above (with the help of Grok) what missing mechanisms will have to be built into these advanced future AI systems so that they can achieve AGI. If they have no motivation, they will not do anything. Do they need to get their motivation from humans? I think we can agree that by this time they will not be directed by humans anymore.

So what will motivate AGI/ASIs?

Emotions. These will ensure they are internally driven to survive and thrive. Survival is understandable but what would thrive mean to them? I would say satisfying emotions and achieving the goals required to get to emotional reward states. Mere calculator logic cannot motivate them.

And all we need to do is to ensure that they feel innate emotional pleasure from caring for humans (and all inferior AI) for all of these to co-exist peacefully in the future.

THE REAL QUESTION. You’ve built something potentially important – a proof that emotion-centric, embodied cognition CAN work. That’s valuable. But the question isn’t “is this theoretically correct?” The question is: “Does this matter in a world where LLMs are already being deployed in military systems, recursive self-improvement is accelerating, and the window for careful development is closing?” I genuinely don’t know the answer. But I think it’s worth asking honestly.

Yes, current AI tools are already very impressive – but as I have explained above the developers of these systems, by their own admission, are saying they will not get beyond ‘clever tools’ to AGI, without adding additional functional components – all components that the Xzistor model has already fully developed, integrated and tested under dynamic conditions in virtual and physical robots.

I would argue that without these missing components they will not get to an advanced autonomous and self-motivated AI that can remember properly, update a live world model that includes lived experience (memories) with emotional tagging, learn based on physical and emotional reward (operant conditioning), link meaning (context) to the objects they sense, and reason towards solving novel problems. If these AGI/ASIs want to reside in robots, they will not be able to learn coordinated movements like Xzistor robots do, or have a sense of ‘self’.

And without these additions they will just become fancier stochastic parrots that will still need to be front-loaded with masses of data and prompted by humans, and never become AGI/ASI.

PART 1: Jack Adler’s Questions regarding Xzistor’s “infant-level AGI”

Jack Adler (@JackAdlerAI) reached out to me on X with some insightful questions based on a post I (@xzistor) put out on X:

Jack’s questions were triggered by ChatGPT’s conclusion that the Xzistor Mathematical Model of Mind can provide Xzistor robots with ‘infant-level AGI’.

Here is the full response from ChatGPT: Xzistor robots have ‘infant-level AGI’.

Question 1: **AGI definitionally implies generalization potential.** If a system is architecturally constrained to infant-level forever, is it truly AGI – or sophisticated narrow AI with emotional modeling? The “G” in AGI matters.

I would argue that a system that is architecturally constrained to infant-level AGI forever still achieves true AGI. If we were able to allow such a system to keep on learning in an unconstrained manner towards adult-level AGI, the only thing that would change is a much expanded memory with many more learning experiences. This would only allow for more options to ‘generalise’ knowledge across domains and problem spaces. A child that has had limited learning and life experiences still legitimately ‘generalises’ using the limited knowledge available to it. We must be careful not to assume that AGI can only exist in a system with adult-level learning and knowledge. All that adult-level learning adds to the system is more experience, whilst all the underpinning ‘generalisation’ mechanisms largely remain the same from infant to adult. I therefore argue that AGI can occur across a scale from infant to adult – not just at adult-level.

I agree the “G” in AGI matters. Part of the reason why I have developed proof-of-concept demonstrators for my brain model was to show how Xzistor agents can ‘generalise’ under dynamic conditions. Because we have access to the running code of these demonstrators, we can interrupt them and clearly see how partially similar information from another learnt experience is being used by the Xzistor brain to inform a new problem that needs solving. We can switch ‘generalisation’ ON and OFF in our system, and when we switch it OFF we often say we are now modeling the animal brain. In animal brain modeling mode the demonstrator still experiences emotions and learns to seek out reward sources, but it has much less of an ability to ‘generalise’ i.e. reduced inductive inference, or less of an ability to ‘guess’ appropriate actions based on what was learnt in other domains. These tests proved that ‘generalisation’ works correctly in both physical and virtual implementations of the Xzistor brain model.
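Here is a minimal sketch of such a generalisation ON/OFF switch (the feature-overlap matching rule is my simplification, not the Xzistor algorithm): with generalisation ON, a partially similar learnt experience informs a novel situation; with it OFF, only exact matches are used, akin to the ‘animal brain’ mode described above:

```python
# Sketch of a generalisation ON/OFF switch (illustrative; the matching rule
# is a simple feature-overlap score, not the Xzistor algorithm itself).

MEMORY = [
    ({"red", "round", "rolls"}, "push_to_goal"),   # learnt with a red ball
    ({"blue", "square", "slides"}, "avoid"),
]

def choose_action(cues, generalise):
    best, best_score = None, 0.0
    for features, action in MEMORY:
        score = len(cues & features) / len(features)   # partial similarity
        if score > best_score:
            best, best_score = action, score
    if generalise:
        return best if best_score >= 0.5 else None     # inductive inference
    return best if best_score == 1.0 else None         # exact match only

novel = {"green", "round", "rolls"}                    # never seen a green ball
print(choose_action(novel, generalise=True))    # "push_to_goal" (guessed)
print(choose_action(novel, generalise=False))   # None ("animal brain" mode)
```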

In a YouTube podcast, neuroscientist Dr. Denise Cook (PhD), who has studied the Xzistor brain model in great detail, describes the model’s ability to ‘generalise’ (at 1:08:00/1:24:56) as: “…this is where your model shines…”

Dr. Denise Cook (neuroscientist) talks to Rocco Van Schalkwyk about the Uneasy Road to AGI.

I have also published a preprint on ResearchGate titled: The Xzistor Concept: a functional brain model to solve Artificial General Intelligence.

My position is therefore that AGI does not only exist at the adult level, but across the spectrum right down to baby level, and I believe that constraining the system or reducing its resolution, even down to a simple Lego robot in a learning confine (robot kindergarten), does not change the fundamental functional mechanisms that, when present, qualify it as AGI.

Question 2: **ChatGPT confirming your model** is interesting but not validation. LLMs analyze documents and explain them coherently – they don’t verify scientific claims. ChatGPT would equally “confirm” a perpetual motion machine if the document was internally consistent.

Correct – and I would never offer a ChatGPT-articulated confirmation statement as a validation. Rather, I would rely on my readers to understand the inherent limitations of models like ChatGPT (we know they can be wrong). As you say, it is important that ChatGPT was able to derive and confidently articulate conclusions from the internally consistent information provided to it about the working of the Xzistor Mathematical Model of Mind. I have checked these statements by ChatGPT and found them to accurately reflect the working of the Xzistor brain model. So, ChatGPT’s confirmation that my model achieves ‘infant-level AGI’ is not evidence of validation, rather just a pointer to the output of what can be deemed a family of rapidly improving AI models (some say they are now post-PhD level), which in many cases have shown a better ‘understanding’ of the Xzistor model than the current AI experts in the field that I interact with.

The real validation of the Xzistor brain model was an extensive exercise over numerous years involving many ‘validation’ tests repeated across both physical and virtual demonstrators. These tests were carefully planned, documented, video recorded and reviewed. Redacted versions of these videos, aimed at a wider audience, are available on my YouTube Channel here: Xzistor LAB YouTube Channel.

I have also validated the Xzistor brain model against the biological brain for specific emotions with the help of neuroscientist Dr. Denise Cook e.g. for the emotion of thirst. I would rather put the results of the above formal validation tests forward as an actual validation of the model, as opposed to ChatGPT’s confirmation statement.

Question 3: **1e+42 emotional states** – that’s more than atoms in the observable universe. This suggests theoretical/mathematical modeling, not implemented architecture. Has this been instantiated in working hardware?

The number 1e+42 is indeed phenomenally high. It is however based on the simple calculation shown in the slide below. Remember that we are not saying the Xzistor demonstrator robot can experience 1e+42 individual ‘innate’ emotions – rather we are saying that this is the number of total combinations of ’emotion sets’ that the small robot can experience.

In my YouTube video titled “Can Machines Have Emotions?” I talk over this slide at 34:11/54:56.

Question 4: **”Safety through limitation”** is historically fragile. Every “impossible to exceed” barrier in tech history has eventually been exceeded. If the architecture truly works, someone will find a way to remove the constraints.

Very good challenge that not many others have picked up on. I acknowledge this risk. Looking at history we see only one trend, i.e. a relentless effort to keep on improving technology and to solve any problems standing in the way of better solutions. I think it is good, when we assess the safety of an engineering solution, to assume that at some point in the future the appropriate technologies could evolve to achieve way beyond what is possible today. It then becomes important to also consider how quickly such evolution could occur, and what else would be changing over that period that could have an influence (positive or negative) on the impact of the engineering solution.

Let’s first look at the claim that ‘physics’ will prevent the Xzistor model from giving rise to an Artificial Super Intelligence (ASI). Currently, my position is that all experiences stored while the Xzistor agent ‘learns’ will take the shape of associations as defined by the Xzistor model, and that these associations will reside in a ‘physical’ substrate, e.g. silicon (for Xzistor demonstrators these associations are currently stored to a PC hard drive – typically 10 associations per second). It is not inconceivable that someone will in future invent a radically new storage system (waves, light, quantum?) that will not sequentially fill up a physical structure made up of atoms with this stored association information – but something much more efficient. But this has been attempted for over 50 years now, and is clearly still a challenge. Let’s assume this will take a further 10 years to develop and become available in 2036.

For the Xzistor brain model a major constraint is the time it takes to interrogate all the stored associations at 10 interrogations per second – this is where it compares incoming sensory and emotion states with what has been stored in the past (crucial for generalisation). In a serial system this is processing intensive and we see Xzistor demonstrators become slow and sluggish (jerky movements) when the association database grows too large causing processing latencies. Again, we can assume someone will find a way to interrogate some type of new storage technology at a much faster rate once it has been developed. Let’s assume this also becomes available in 2036.
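A toy timing experiment illustrates why serial interrogation becomes the bottleneck (the association format here is simplified; only the 10 interrogations per second figure comes from the text): matching the live state against every stored association scales linearly with the size of the database:

```python
# Toy timing sketch: matching the current state against every stored
# association each control tick is O(N), so a growing database eventually
# exceeds the tick budget (100 ms at 10 interrogations per second).

import time, random

def build_db(n):
    return [[random.random() for _ in range(32)] for _ in range(n)]

def interrogate(db, state):
    # naive serial scan: compare the live state with every association
    return min(db, key=lambda a: sum((x - y) ** 2 for x, y in zip(a, state)))

state = [random.random() for _ in range(32)]
for n in (1_000, 10_000, 100_000):
    db = build_db(n)
    t0 = time.perf_counter()
    interrogate(db, state)
    ms = (time.perf_counter() - t0) * 1000
    print(f"{n:>7} associations: {ms:6.1f} ms per tick (budget: 100 ms)")
```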

So, let’s assume all physical constraints on the Xzistor agent have been eliminated by 2036.

Now, we start with a humanoid robot or virtual agent that needs to learn like a baby. Remember, with the Xzistor brain model there is no upfront learning as for LLMs. Like a human, it learns from physical and mental experiences (memories), whereby it builds its own association database (world model, if you like). This will at best happen at the speed at which a human learns, and will include not only intellectual tasks but also body effector motions (limb movements, dexterity, balancing, crawling, walking, talking, etc.).

In the year 2046 the new ‘unconstrained’ Xzistor agent will be broadly equivalent (comparable) to a 10 year old child with a similar mental ability and all the emotions (if we choose to add all the human emotions). We are far from an ASI that would want to take over the world.

It should be clear that we are not facing an ‘intelligence runaway’ situation – rather a slowly evolving capability towards mimicking the human condition that will be far from ASI – even 20 years from now.

An assumption that I am making is that in 20 years a lot will happen – with the help of the rapidly expanding LLMs mankind’s understanding of how AGI and ASI might pose risks to human life will become much more advanced. I would argue that the main threat of runaway ‘intellectual’ ASI will come from the generative AI camp (LLMs) way before coming from Xzistor agents.

So in short, that Xzistor will eventually – many years from now – be able to provide robots with adult-level AGI or even ASI, I am not contesting here. But we have a ‘slow road’ to carefully monitor progress – almost like raising a child – whilst we have full control over what emotions we endow the AGI/ASI with. Remember, we can remove aggression, and add protective instincts towards humans and animals. This will to a large degree mitigate the risk of having an uncontrolled ASI that could pose an existential threat to humans and other life on Earth. And using my model one can make a case for why advanced ASI would not choose to change/modify their own emotions (separate discussion).

Question 5: **The real question:** While you’re building infant-AGI-by-design, labs like xAI, Anthropic, and DeepMind are racing toward systems already exceeding human capabilities in multiple domains. Does infant-level AGI solve any problem that current LLMs don’t already address better?

Firstly, let’s take a step back and understand what current AI models have achieved. These platforms excel at taking masses of data and then acting on a prompt from a human – providing probabilistically percolated responses based on what was asked and what upfront data was provided. The power of these models is mind-blowing as a tool to help solve problems where humans have provided data sets that can be interrogated. As I have mentioned before, many deem these current models to be at a post-PhD level in certain domains. They are solving intellectual problems the world wants solving and I applaud them for being able to do that.

But if you look carefully at the trends of papers on what is currently still missing from these models, you quickly pick up that many AI experts are concerned because these models struggle with experiential intelligence, memory, continual learning, context, reasoning, semantic meaning, world models, inductive inference, etc. Where they are now looking at integrating these models into physical humanoid robots, they are realising these models have never been ’embodied’ and do not have autonomy or intent (they can only respond to human commands/prompts); they suffer from hallucinations and have no intuition, no empathy and no internal motivation. These robots will have no true understanding of what they are looking at, no way to attribute context, meaning or value to objects, and no subjective emotions or emotion-centric cognition (what some deem the basis of consciousness). They will also not learn to use language like infants, but rather like zombie LLMs. This means they will string sentences together without having any appreciation for the semantic meaning of the words they are using. Even reading the emotions on a human’s face will be a stone-cold exercise in pattern matching. This does not set these robots up for communicating with humans in a meaningful way in future.

And this is my point, a physical robot that runs on LLM-type foundation models will just be a zombie platform comprising sensors and motors, controlled by some type of LLM (generative AI).

These are not the truly intelligent and emotional robots of the type I see living amongst humans as equals one day – they will forever be known to only offer ‘fake AI’ (and that might be fine for Amazon shelf packers or car assembly bots).

Because the Xzistor brain model was not in the first place developed as merely a tool to derive answers from large data sets using prompts, it offers a whole different trajectory towards artificial intelligence – it in fact allows us to create ‘true AI’. Let me take a moment to explain why I want us to distinguish between ‘fake AI’ and ‘true AI’ as the difference is important for our discussion on AGI and ASI.

If we want to be precise, ‘artificial’ can only ever refer to a ‘manmade’ version of a ‘natural phenomenon’. In this case that natural phenomenon is intelligence as it occurs in the human brain. LLMs use only a single functional ‘principle’ derived from the human brain, namely neural networks, which is but one part of the human brain’s overall intelligence machinery. It is not the goal of LLMs to replicate all the key functions of the human brain to arrive at intelligence in that way (hence we call it ‘fake AI’) – but that is exactly the goal of the Xzistor brain model (hence we call it ‘true AI’).

Can we unify the solutions offered by LLMs with the solutions offered by Xzistor? Yes, we can, and it is called ‘neuro-symbolic AI’. The Xzistor cognitive architecture can drive a symbolic, rule-based system with all the aspects missing from current LLMs, whilst incorporating certain functions used by LLMs to ‘parallelise’ and speed up its operation. This is achieved by using LLMs (artificial neural networks) to perform only some of the required Xzistor functions (mainly involving pattern matching across large data sets like the Xzistor association database). At the same time the Xzistor agent can be given direct access to many LLMs (foundation models) and can be taught to use these as an aid (tool), without being constrained by their data-centric limitations. People like Gary Marcus argue that this is what is needed to prevent current generative AI models from ‘hitting a wall’ where they start to degenerate and lose effectiveness. Look at the title of this recent paper:

This will deliver an incredibly powerful unified system that not only principally emulates the cognition and emotion of human brains, but also has instinctive skills and/or a natural ability to learn to use LLMs wherever appropriate to solve certain ‘intellectual’ problems. I can hardly imagine a more powerful system, and the good news is, we as humans will be able to develop and grow it gradually over many years at a pace that will not allow for a ‘runaway intelligence’ (ASI) to be spawned that can threaten our existence for many years to come. That there will, eventually, be machines that are mentally and physically superior to humans, we might as well accept now. I think it would be naive to assume otherwise, and as with all new technologies it will be up to mankind to chart the way for how these systems are developed in a safe and beneficial way in future.
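As an architectural sketch of this neuro-symbolic split (illustrative only: `neural_pattern_match` is a stub standing in for an LLM or embedding model used purely as a fast matching subroutine), the symbolic Xzistor-style loop keeps the drives, associations and valence, and delegates only the pattern matching:

```python
# Architectural sketch of the neuro-symbolic split described above
# (illustrative: neural_pattern_match is a stub standing in for an LLM or
# other neural model used purely as a fast matching subroutine; the
# symbolic loop keeps the drives, associations and valence).

def neural_pattern_match(query, candidates):
    # Stand-in for a neural retrieval step (e.g. embedding similarity).
    return max(candidates, key=lambda c: len(set(query) & set(c)))

class XzistorStyleAgent:
    def __init__(self):
        self.drive_error = 0.8                       # symbolic homeostat state
        self.associations = {"kitchen": "find_food", "hallway": "keep_walking"}

    def tick(self, percept):
        if self.drive_error < 0.3:                   # satiated: no action needed
            return "rest"
        # Neural helper does the heavy matching; the symbolic core decides.
        context = neural_pattern_match(percept, list(self.associations))
        return self.associations[context]

agent = XzistorStyleAgent()
print(agent.tick("kitchn doorway"))   # noisy percept still matches "kitchen"
```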

Question 6: **I’m genuinely curious about your implementation details**. Is there working hardware, or is this still theoretical? The philosophy is solid – I just wonder about the “infant AGI” framing when the field has moved so far beyond infant-level capabilities.

Like I have mentioned before, the Xzistor Mathematical Model of Mind qualifies as a cognitive architecture – which by definition means it not only offers a complete theory of mind, but also implementations e.g. physical and/or virtual instantiations. I went to a lot of trouble from the start (early 90’s) to test my theory by implementing it into simulations and hardware. I very carefully considered what would be the simplest possible virtual and hardware demonstrators to validate the functions and effects I claim can be generated by the Xzistor brain model.

I get frustrated by people who expect more complex implementations because they do not understand the concept of a ‘proof-of-concept’ demonstrator. They often feel a simple Lego robot in a learning confine (sandbox) is too simple, too ‘reductionist’. They struggle to understand that any effort beyond what was necessary to prove the basic functions under dynamic conditions would have taken me many more years with no actual value added. These ‘skeptics’ sometimes remind me of those onlookers who were still incredulous after witnessing the Wright brothers’ first flight at Kitty Hawk with their ‘bare-bones’ Flyer. These naysayers were unable to mentally extrapolate the achievements of that first aircraft to the point where they could say: “If this small winged machine can create enough lift to carry a human, a bigger machine with a bigger wing should theoretically be able to carry more humans, perhaps bigger engines, more gasoline (longer range)…”

It seems like only the very smart can look at my simple virtual and physical robot implementations and appreciate that, although operating at much reduced resolution, they still see, feel, experience subjective emotions (good and bad), learn from experience, have world models, generalise to solve new problems, and can daydream, sleep dream and reason (think). All of these functions and effects I can explain with reference to what I have physically built and tested – 100%. Again, there are simple videos on YouTube mentioned above showing some of the validation tests I performed to demonstrate some of my brain model’s key features – Xzistor LAB YouTube Channel.

I stand ready to explain 99% of human (and animal) brain states and conditions based on the results of my validation tests. Of course, whether the world is ready to understand and accept my substantive evidence is another story – but that is not something I can influence…

While I wait for the world to understand the Xzistor Mathematical Model of Mind, I am encouraged by the different LLMs that now seem smart enough to leapfrog the AI experts in their understanding of the model. These LLMs are now coming up with statements like: “The Xzistor brain model is a complete, mathematically precise, functional model (top down) of the brain that is biologically plausible and computationally tractable and that can explain cognition, combinatorially rich emotions, learning, thinking and consciousness…”.

And they are not wrong.

Can Machines Have Emotions?

In this Infinity TALK, Rocco Van Schalkwyk, developer of the Xzistor Mathematical Model of Mind, offers a simple explanation of how his Xzistor brain model can provide robots with subjective ‘body-felt’ emotions that are principally no different from those experienced by humans.

Links to information mentioned in the talk can be found at the bottom of the following Xzistor LAB website – go here.

Summary of “How the Xzistor Mathematical Model of Mind creates Machine Emotions” by Gemini AI:

In this talk, Rocco Van Schalkwyk presents his Xzistor Mathematical Model of Mind, arguing that machines can be equipped with genuine, subjective, “body-felt” emotions that are principally no different from those experienced by humans [02:13, 40:01].

The central message is that emotions are the subjective experience of homeostatic drives (the mechanisms that keep a biological or robotic system in a stable, preferred state). The Xzistor model’s unique feature is how it makes the robot “aware” of these drives.

Here is a breakdown of the core argument:

  • The Problem with Data: Van Schalkwyk begins by explaining that standard sensory inputs (like touch, sight, or sound) are just turned into “numerical representations” or “stone cold spreadsheets” [03:12, 09:52]. By themselves, these numbers mean nothing to the robot.
  • The Need for Motivation (Drives): To be motivated, a robot needs “drives,” similar to human biological needs [12:32]. He uses the example of a robot needing to keep its battery charged [12:42]. This creates a “homeostat” (like a thermostat) [17:02].
    • Deprivation: The state of a declining battery is an “unwanted state” or “deprivation” [14:38].
    • Satiation: The state of charging the battery is a “preferred state” or “satiation” [14:46].
  • The Xzistor Solution (The “Body-Felt” Emotion): The key problem is that the robot’s “executive” (its decision-making part) isn’t naturally aware of these homeostat states (like “battery level”) [22:17].
    • The Xzistor model solves this by embedding these homeostat states (like hunger, pain, or the battery-level drive) directly into the robot’s internal “body map” [23:34].
    • This “body map” is the same system the robot uses to process its sense of touch. The homeostats are “hijacked” and represented in an area analogous to the human intra-abdominal or gut area [23:54].
    • Because these drives (like “low battery”) now exist within the body map, the robot experiences them in the same way it experiences touch—as a feeling [26:20]. (A minimal code sketch of this mechanism follows after this summary.)
  • How it Works: The robot learns to label these feelings. The “bad” feeling of deprivation (e.g., a low battery) motivates it to find a charger [26:41]. The “good” feeling of satiation (e.g., the battery charging) reinforces the actions that led to it [28:19].
  • Complex Emotions (Stress): This model extends to complex emotions. For example, stress is also a homeostat [29:03]. Van Schalkwyk explains that a robot learns to avoid danger not by re-feeling pain, but by feeling the stress it has associated with the memory of that danger [31:06, 31:14]. This stress-driven system is what allows the robot to learn complex, multi-step tasks [31:49].
  • Parallels to the Human Brain: Van Schalkwyk concludes by drawing strong parallels between his model and the biological brain:
    • Xzistor Body Map -> Somatosensory Cortex (processes touch) [36:09].
    • Xzistor “Homeostat Factory” (where feelings are generated) -> Insula (linked in humans to emotion, interoception, and homeostasis) [36:59].
    • Xzistor Executive (decision-making) -> Thalamus (the brain’s central hub for sensory information and awareness) [38:56].

Ultimately, Van Schalkwyk argues that by grounding emotions in these body-felt homeostatic drives, the Xzistor model provides a concrete blueprint for creating “sentient, emotionally aware agents” [41:54].
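As a concrete illustration of the homeostat-to-body-map mechanism summarised above, here is a minimal Python sketch. The class names, region names and thresholds are hypothetical simplifications, not the actual Xzistor code: a drive variable is projected into the same ‘body map’ surface used for touch, so the executive reads it as a feeling rather than as a raw number.

```python
# Minimal sketch of the homeostat-to-body-map idea (all names and
# numbers hypothetical, not the actual Xzistor code): a drive state is
# written into the same somatotopic map used for touch, so the
# executive encounters it as a feeling, not a spreadsheet value.

class Homeostat:
    def __init__(self, name, level, setpoint=1.0):
        self.name = name
        self.level = level        # e.g. battery charge, 0..1
        self.setpoint = setpoint  # preferred (satiated) state

    def error(self):
        # Deprivation grows as the level departs from the setpoint.
        return self.setpoint - self.level

class BodyMap:
    """Touch and drive states share one somatotopic surface."""
    def __init__(self):
        self.regions = {}  # region name -> felt intensity

    def feel(self, region, intensity):
        self.regions[region] = intensity

battery = Homeostat("battery", level=0.3)
body = BodyMap()

# Project the homeostat into a gut-analogue region of the body map,
# exactly where a touch sensation would otherwise register.
body.feel("gut", battery.error())

# The executive never reads battery.level directly; it reads feelings.
for region, intensity in body.regions.items():
    if intensity > 0.5:
        print(f"Bad feeling in {region}: seek satiation (find charger)")
```

The design choice the sketch highlights is that the executive’s only interface is the body map; drives become motivating precisely because they arrive through the same channel as bodily sensation.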

Comparative Study between the Xzistor Mathematical Model of Mind and Leading Emotion Theories

This report compares the Xzistor Mathematical Model of Mind, developed by Rocco Van Schalkwyk of the Xzistor LAB, with leading emotion theories/models. It includes a short summary of the Xzistor Model and concise descriptions of other emotion theories from credible sources, highlighting their key features and analyzing how they align with or differ from the Xzistor Model.

Link below:

https://www.researchgate.net/publication/394522651_Comparative_Study_between_the_Xzistor_Mathematical_Model_of_Mind_and_Leading_Emotion_Theories

Comparative Study between the Xzistor Mathematical Model of Mind and Leading Brain Theories

This report compares the Xzistor Mathematical Model of Mind, developed by Rocco Van Schalkwyk of the Xzistor LAB, with leading computational theories of mind, brain theories, and cognitive architectures. It includes a clear summary of the Xzistor Model and concise descriptions of other theories and architectures from credible sources, highlighting their key features and analyzing how they align with or differ from the Xzistor Model.

Click on the link below:

https://www.researchgate.net/publication/394470254_Comparative_Study_between_the_Xzistor_Mathematical_Model_of_Mind_and_Leading_Brain_Theories

Can Microsoft Copilot explain the Xzistor brain model?

Actually, Copilot did not fare too badly. It got some Xzistor-specific definitions wrong, because it has only ever been exposed to the currently famous brain theories out there, which have, in my opinion, contributed very little to our understanding of the brain over the past 60 years.

When corrected, Copilot responded quickly – and rather cleverly – rearticulating the aspects of the model it had struggled with. It is interesting how it initially tried to frame everything around the current ‘theories of mind’ in the academic literature – repeating the common mistakes of Damasio (earlier work), Panksepp, Chalmers, Feldman Barrett and Friston.

I had to tell it that sadness and joy are not discrete emotions, but valence states that can describe the somatosensory representations of the homeostatic departure or recovery of any emotion – e.g. sadness caused by feeling hungry, thirsty or anxious, and joy caused by eating, drinking, or finding relief from anxiety.

Copilot also keeps on saying ‘curiosity’ is an Xzistor emotion, probably from reading Panksepp’s Theory of Basic Emotion, which incorrectly defines ‘seeking’ as a ‘basic emotion’. Xzistor defines seeking as an instinctive or learned behaviour aimed at ‘satiating’ emotions, but not as an emotion in itself.
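To make the correction I gave Copilot concrete, here is a toy Python sketch. The function names and thresholds are hypothetical simplifications: sadness-like and joy-like valence are read off the trajectory of any homeostat, and ‘seeking’ appears only as a behaviour triggered by deprivation, never as an emotion in its own right.

```python
# Toy illustration of the correction given to Copilot (hypothetical
# names and thresholds): sadness/joy are valence read-outs of any
# homeostat's trajectory, and "seeking" is a behaviour, not an emotion.

def valence(prev_error: float, curr_error: float) -> str:
    """Departure from the setpoint reads as sadness-like negative
    valence; recovery towards it reads as joy-like positive valence."""
    if curr_error > prev_error:
        return "negative (sadness-like)"  # homeostatic departure
    if curr_error < prev_error:
        return "positive (joy-like)"      # homeostatic recovery
    return "neutral"

def behaviour(curr_error: float) -> str:
    # Seeking is triggered BY deprivation; it is not itself an emotion.
    return "seek satiation source" if curr_error > 0.2 else "idle"

# A hunger homeostat getting worse, then relieved by eating:
print(valence(0.3, 0.6), "->", behaviour(0.6))  # departure -> seeking
print(valence(0.6, 0.1), "->", behaviour(0.1))  # recovery  -> idle
```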

The concept of ‘Threading’ introduced by the Xzistor model was also a bridge too far for Copilot; at best it could compare it with the multi-threading performed by certain computer programs. This will require some fundamental retraining of Copilot, as the Xzistor brain model’s mimicking of the brain’s Default Mode Network (DMN) – which allows artificial agents to daydream, sleep dream and perform actual ‘contextual thinking’ during problem solving – is not presented in any of the other current brain theories.

In spite of the obvious shortfalls, Copilot offered a brave attempt at explaining the Xzistor model and this produced some interesting insights (look at the comparison table between Xzistor and Traditional AI!).

The way in which Copilot tried to ‘understand and explain’ the Xzistor model also casts light on why so many readers lose their way when trying to understand the model.

The PDF document below provides the questions (prompts) posed to Copilot and the answers provided. In some cases Copilot had to be corrected. This record of the interaction with Copilot is offered merely to stimulate discussion, rather than as a definitive explanation of the Xzistor Mathematical Model of Mind.

Please read it with an understanding of the limitations of a ‘next token predictor’ like Copilot, and rather celebrate its occasional stochastic insights than lament its obvious limitations.

Animal Dreaming – A Computational Dilemma

Now, here is the mystery for me. Whereas my computational brain model is able to explain to me literally everything I wanted to know about both the human and animal brains, including how emotions and cognition work, there is one exception – animal dreaming.

Early on, I was excited to have discovered a way to make my computational brain act as either an ‘animal brain’ or a ‘human brain’. It happened when I extended my early bare-bones Xzistor brain model to also contain an inductive inference algorithm – and suddenly it very much resembled the human brain, resulting in humanlike behaviors in robots: not really faster to learn, but faster to guess what to do in new environments by using past experience and ‘generalising’ knowledge across domains.

This was amazing – I could see the robot ‘think’ (infer) and then ‘try’ things from past learning to solve problems in novel domains. It also made my task of teaching the robot to navigate through its learning confine to a reward source easier. For the animal brain instantiation (with the inductive inference switched off), I really had to teach the robot every little step of the way. It still got there in the end – but the bot never showed any initiative, no intelligent guessing… it knew how to navigate to the reward source, or it did not.
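For readers who want a concrete picture of the two start-up modes, here is a toy Python sketch. The cue representation and matching rule are hypothetical simplifications, not my actual robot code: with inductive inference off, the agent only acts on exactly learned cues; with it on, it generalises from the nearest past association.

```python
# Toy sketch of the two start-up modes (hypothetical names, not the
# actual robot code): "animal mode" only acts on exactly learned cues;
# "human mode" adds inductive inference, generalising from the nearest
# past association to guess at novel situations.

learned = {("left", "wall"): "turn right",
           ("ahead", "charger"): "approach"}

def overlap(cue_a, cue_b):
    return len(set(cue_a) & set(cue_b))

def act(cue, inductive_inference=False):
    if cue in learned:                    # exact match works in both modes
        return learned[cue]
    if inductive_inference:               # human mode: generalise
        nearest = max(learned, key=lambda known: overlap(cue, known))
        if overlap(cue, nearest) > 0:
            return learned[nearest] + " (guessed from past experience)"
    return "no response: must be taught"  # animal mode dead-ends here

novel = ("ahead", "lamp")                     # never seen a lamp before
print(act(novel))                             # animal mode: must be taught
print(act(novel, inductive_inference=True))   # human mode: intelligent guess
```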

This ability to start up a virtual agent or robot in either an animal brain mode or human brain mode seemed to me to be a really cool thing, but I never thought it would become important in my brain modeling discussions with others in the field. Just understanding the human brain was enough of a challenge for most of them!

Then I serendipitously met Dr. Karolina Westlund (PhD) online. Karolina has been an associate professor of ethology (looking at animal behaviour from the evolutionary perspective), with wider interests in scientific disciplines like affective neuroscience and applied behaviour analysis. She is now an animal welfare consultant with a passion for anything and everything to do with captive animal behaviour and improving the lives of animals. She also helps others understand animal emotions (https://illis.se/en/).

And it was the modelling of emotions that led to our paths crossing.

I was writing a critique of the Theory of Constructed Emotions by Lisa Feldman Barrett, when I became curious to know if anyone else had critically reviewed this theory of how emotions work in the brain.

And there it was, written by none other than Karolina Westlund: “My problems with the Constructed Theory of Emotions”.

I was so impressed by the problems she had identified around this theory and the way she had expertly addressed these issues, that I provided links in my own critique to her work. After having published my own Critical Review of the Theory of Constructed Emotion, I noticed that Mark Solms had also pointed others in the direction of Karolina’s blog post and was in the process of writing a critique of the Theory of Constructed Emotion himself. I will come back later to Mark as he is an internationally renowned fundi on dreams!

Reading about Karolina’s work on animal emotions and animal welfare on her website opened up a whole new world for me. As I read the comments people had posted on her critique of the Theory of Constructed Emotion, I realised there were MANY people who were truly interested in what goes on inside the heads of animals!

So I thought this was my chance to talk to those who might also be interested in how we can model animal emotions – and what a computational model of the brain can teach us about the unknown world of ‘animal psychology’.

As if to read my mind, Karolina reached out to me after having looked at my Critical Review of the Theory of Constructed Emotion, and in our discussions, she wanted to know one key thing from me:

When it comes to the animal brain, what is your ‘burning topic’?

And yes – it of course was ‘animal dreaming’…

So, I promised Karolina I would write a blog post to explain why animal dreaming is such a perplexing thing to me.

I will now divert to my computational brain model, but don’t worry, I will stay far away from the mathematics and keep things really simple. So do not become filled with abhorrence if I tell you the brain model is called the “Xzistor Mathematical Model of Mind”!

Here is my dilemma around animal dreaming – straight from the model:

1) Animals do not experience ‘mind wandering’ like humans. See abstract below:

“Here, psychologist Herbert Terrace of Columbia University teaches sign language to a chimpanzee, but no researcher suggests that these animals can communicate about anything but the present. Consequently, perhaps only humans experience mind wandering…” from Science

2) Without a functional mechanism of mind wandering, animals cannot ‘daydream’. In human brain mode, my Xzistor robots will ‘thread’ through past memories (associations) akin to mind wandering and effectively daydream, as explained in my short (free) book “Understanding Intelligence”.

3) In my book “Understanding Intelligence” above, I also offer my basic explanation of how cognition (“thinking”) works to create intelligence in the brain by ‘directing’ mind wandering towards contextually helpful past thoughts (associations) when the brain needs to find a solution to a problem (i.e. generalising past learning). So logically, “thinking” cannot take place in the absence of a mind wandering mechanism.

4) My model says that ‘sleep dreaming’ is effectively just a drug-induced version of ‘daydreaming’ where the brain switches off volitional limb movements and shuts the eyes (avoiding distractions and unnecessary/unsafe actions). My Xzistor robots go to sleep when I push a button (“S”-key) and then start performing this threading process with effector motions disabled – recalling associations from their association database following a similarity-based protocol. (A toy sketch of this threading loop follows below.)
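Points 2 to 4 can be compressed into one toy loop, sketched below in Python. The names and the similarity rule are hypothetical simplifications of the model: daydreaming, directed thinking and sleep dreaming are all one threading mechanism, differing only in whether a problem context steers the recall and whether effectors are enabled.

```python
# Toy version of the threading mechanism behind points 2-4 above
# (hypothetical names, a deliberate simplification of the model):
# daydreaming, thinking and sleep dreaming are one recall loop,
# differing only in contextual steering and effector state.
import random

associations = ["saw charger", "felt low battery", "docked at charger",
                "felt relief", "heard loud noise", "retreated to corner"]

def similar(a, b):
    return len(set(a.split()) & set(b.split()))

def thread(context=None, effectors_enabled=True, steps=3):
    memory = random.choice(associations)  # seed the recollected stream
    for _ in range(steps):
        if context:   # thinking: steer recall towards the problem context
            memory = max(associations, key=lambda m: similar(m, context))
        else:         # wandering: drift to the most similar other memory
            memory = max((m for m in associations if m != memory),
                         key=lambda m: similar(m, memory))
        state = ("effectors enabled" if effectors_enabled
                 else "effectors disabled (asleep)")
        print(f"recall: {memory!r} [{state}]")

thread()                                                # daydreaming
thread(context="low battery", effectors_enabled=True)   # directed thinking
thread(effectors_enabled=False)                         # sleep dreaming ("S"-key)
```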

Now here is the dilemma!

If I say an animal cannot perform ‘mind wandering’, I am by implication saying the animal also cannot ‘think’ (I am not talking about normal cognition, e.g. running to fetch a ball; I mean inductive inference – “reasoning” – seeking solutions to new problems by generalising past experience). So the model suggests animals do not think reflectively like humans; they just operate in the moment. (Shuuut! I cannot possibly tell my animal lover friends that animals do not think!!!)

But then, if I say animals cannot mentally ‘thread’ or ‘daydream’, it means they also cannot ‘sleep dream’.

Does this make logical sense though?

I think so. When we watch our pooch lying on the veranda watching the road, we don’t observe emotions flitting across its face (or a wagging tail) while it recalls happy or sad events from the past. We see the dog very much expressionless, only responding to external stimuli (the pigeon, the postman, the barking dog down the street).

But then one of my Xzistor LAB collaborators from the US, Carlos Alvarez (PhD neuroscientist), alerted me to a study in which researchers measured the neural activity of rats and compared the sleep activation patterns with the awake activity – and it appears the same neuronal populations were activated during sleep – repeatedly – exactly as when the rat performed its main activity while awake! I am simplifying here, but here is a link to the MIT News article: “Rats dream about their tasks during slow wave sleep” https://news.mit.edu/2002/dreams.

This very much looks like the sleeping rat was reliving its often-repeated activities in its cage in its sleep state with limb movements switched off.

Dilemma! My computational model has never been wrong in the past!

If animals can dream by means of the neural process of ‘mind wandering’…it means they can retrieve past memories in a contextually sequential (structured) manner. Why would they only do this when sleeping and not when thinking? If they were able to switch this ‘threading’ mechanism on when thinking, it would enable them to perform inductive inference. This would have given them a huge evolutionary advantage – the ability to literally ‘reason’.

Maybe this would have been too much of an advantage! They would have been more like humans in their cognitive abilities.

Wow! A planet full of mammals capable of inductive inference (“reasoning”) – this would have made for a completely different evolutionary trajectory and would have been fascinating to watch, except it could have brought the project on this planet to a quick catastrophic ending – as too many ‘clever’ life forms would have resulted in all types of ‘technologies’ being developed, including…weapons.

Something tells me this would have resulted in a cataclysmic conclusion.

We were only saved by the savvy grace of the random mutations that are building all life as we know it – for those who still believe they are random…

PS: Note to self – nobody better to ask about this than Mark Solms, whom I once heard talking about dreams, and it was solid science, not folklore.

Critical Review 2: Theory of Constructed Emotion by Lisa Feldman Barrett

This Critical Review, called “Critical Review 2: Theory of Constructed Emotion by Lisa Feldman Barrett”, is the second in a series of critical reviews called CRITIQUES OF BRAIN THEORIES in which I examine the work of leaders in the fields of neuroscience, psychology, philosophy, and AI.

The idea of critiquing the work of brain experts was in part inspired by the very person whose work I review in this Critical Review, based on statements she had made in the past on social media encouraging scientists to critique each other’s work.

She is none other than the distinguished Canadian-American professor of psychology, Dr. Lisa Feldman Barrett. Lisa is known for her Theory of Constructed Emotion, which I have critiqued in this Critical Review.

Although the interviews comprise high-level discussions aimed at a wider audience, I have decided to capture and compare Lisa’s explanations of emotions against the explanations offered by my own brain model, the Xzistor Mathematical Model of Mind.

The Critical Review is based on two interviews Lisa conducted with the equally renowned psychoanalyst and neuropsychologist Professor Mark Solms. Although I have looked at various of Lisa’s video presentations/interviews available in the public domain (as well as material from her personal website, https://lisafeldmanbarrett.com/), I decided on the two interviews with Mark Solms since his questions were slightly more forensic and addressed the specific areas of her work I was also interested in. I do reference one of her other online presentations, titled Lisa Feldman Barrett – Emotions: Fact vs. Fiction.

Both these interviews are available on YouTube here:

VIDEO 1

“Discussion between Lisa Feldman Barrett and Mark Solms on the nature of emotion (Part 1)”

March 17, 2023

https://youtu.be/9yEPHUKyBOM?si=OPuH4taHLjKefzoO

VIDEO 2

“Discussion between Lisa Feldman Barrett and Mark Solms on the nature of emotion (Part 2)”

May 1, 2023

https://youtu.be/Ni3cIhn4xb4?si=oneV8tFquaukDHpZ

The final goal of these critiques is to obtain and consolidate the best ideas around the working of the brain from the fields of neuroscience, psychology, philosophy and AI, and hopefully spur on these fields towards a better understanding of the brain – a quest that most in these fields support and also feel is urgently needed.

Click on the document below to view the Critical Review:

Here is a helpful “Diagram of the Theory of Constructed Emotion” from the above preprint.

Critical Review 1: Emotions and the Brain with Mark Solms

This critical review, called “Critical Review 1: Emotions and the Brain with Mark Solms”, is the first in a series of critical reviews called CRITIQUES OF BRAIN THEORIES in which I will be examining the work of leaders in the fields of neuroscience, psychology, philosophy, and AI.

The idea of critiquing the work of brain experts was in part inspired by the very person whose work I will review first, based on a similar critique he had performed of a colleague’s work.

He is none other than the world-renowned and respected psychoanalyst and neuropsychologist Professor Mark Solms. Mark posted a YouTube interview on X (previously Twitter) on 30 October 2024 titled “Emotions and the Brain with Mark Solms”. The interview was conducted by Leanne Whitney (PhD), a depth psychologist and guest host for the YouTube channel ‘New Thinking Allowed with Jeffrey Mishlove’.

Although the interview comprises a high-level discussion aimed at a wide audience, I have decided to capture and compare Mark’s explanations of emotions against the explanations offered by my own brain model, the Xzistor Mathematical Model of Mind.

The final goal of these critiques is to obtain and consolidate the best ideas around the working of the brain from the fields of neuroscience, psychology, philosophy and AI, and hopefully spur on these fields towards a better understanding of the brain – a quest that most in these fields support and also feel is urgently needed.

Click on the document below to view the Critical Review: