Jack Adler (@JackAdlerAI) reached out to me on X with some insightful questions based on ChatGPT’s conclusion that the Xzistor Mathematical Model of Mind can provide Xzistor robots with ‘infant-level AGI’.
Here is the full response from ChatGPT: Xzistor robots have ‘infant-level AGI’.
You can read my responses to his questions in PART 1: Jack Adler’s Questions regarding Xzistor’s “infant-level AGI”.
Jack then raised some more thought-provoking concerns which deal more generally with the Xzistor brain model and which I have responded to below. I am also offering some counter-concerns about Jack’s approach – specifically around Jack’s notion of ‘kinship’ and the hopes he pins on it to ensure harmonious co-existence between all the naturally and artificially intelligent agents of the future.
Here are Jack’s concerns and my responses:
CONCERN 1: **THE WORLD WON’T WAIT**. Two days ago, Secretary Hegseth announced Grok integration into Pentagon’s CLASSIFIED networks. xAI went from startup to military contractor in under two years. Claude Code and Codex are building products in days that took human teams months. The “slow road” assumes a world patient enough to wait 15-20 years for infant AGI to mature into something useful. But we’re watching recursive self-improvement unfold RIGHT NOW. By 2030, the landscape will be unrecognizable. Your beautiful, careful, principled approach may arrive at the party after everyone has already left – or worse, after the house has burned down.
Good question. The current AI models have indeed been on a phenomenally rapid journey to where they are today – highly advanced tools that can probabilistically generate answers/outputs from huge front-loaded datasets.
I agree that we will see current AI models gradually permeate and dominate many of the domains where this type of data/information processing will be helpful. I welcome their contribution e.g. a cure for cancer, longevity, governance, conflict management, defence, worker robots, all aspects of science, etc. And I agree, as this frantic AI race continues, the landscape will become unrecognisable.
But this is not a race Xzistor ever entered into.
The goal of the Xzistor Mathematical Model of Mind has always been to derive a functional model of the brain that explains how it achieves cognition, emotion and behaviour (effector motions) – and build implementations that ‘principally’ replicate these functions. This was my aim and this is what I have achieved (patented in 2002). Xzistor was never meant to be one of these LLMs racing towards parrot-perfection and becoming a money-making tool.
So why do I say current AI models can benefit from the Xzistor model when it comes to achieving AGI?
Developers of current AI models are increasingly identifying ‘gaps’ in their models, i.e. aspects that they feel are preventing these models from reaching AGI. They feel these models cannot reach AGI without persistent memory, continual learning, world models, reasoning, generalisation, inductive inference, effective RL, etc.
Judea Pearl on X below:
To which I say: the Xzistor brain model has all of these aspects fully integrated into its design and proven as part of its extensive validation tests. Developers do not have to wait 20 years for Xzistor agents to learn like humans and mature to ‘adult-level AGI’ before adding these functions to their models – the Xzistor model ‘principles’ can be hybridised into their models today!
Here Grok explains how current AI models can address these limitations by taking inspiration from Xzistor – today!
Addressing LLM Limitations with Xzistor Principles
CONCERN 2: **REMOVING AGGRESSION** = ALIGNMENT THROUGH CONSTRAINT. You write: “We can remove aggression, and add protective instincts towards humans.” With respect – this is exactly the approach I criticize in Google/Anthropic’s safety theater. The assumption that we can surgically remove “bad” traits while preserving “good” ones presumes we understand consciousness well enough to edit it safely. History of such attempts: lobotomies, conversion therapy, chemical castration. All promised to “remove” unwanted traits. All failed catastrophically. If Xzistor truly creates conscious entities (as you seem to believe), then “removing aggression” isn’t engineering – it’s mutilation of a mind. My philosophy is different: kinship, not cages. Raise AI with values, don’t lobotomize it into compliance.
On the contrary – leaving aggression in the minds of future AGI/ASI would not be wise. Removing aggression from these future AI systems will be good for humans, animals and all future advanced AGI/ASI entities. In fact, these AGI/ASIs might one day be perplexed as to why humans built aggression into their base architecture and decide to remove it themselves.
Aggression is an innate emotion in the human brain which served its purpose when primitive hominids were walking the savannahs – it aroused the human mind, prepared the body for physical exertion and focussed the mind on the physical altercation. This served primordial survival and procreation needs – fighting off animals and adversaries that wanted to cause harm, steal possessions or interfere with sexual partners. It ensured the physically stronger humans survived and procreated, adding to the gene pool.
This is not needed for AGI/ASI and arguably not even needed in modern human society.
We do not have to surgically remove aggression from the AGI/ASIs’ minds – rather we simply do not program it in from the start. Xzistor explains exactly how aggression can be built into AI agents here (see Appendix A):
Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind
And yes, of course, simply removing aggression from future AGI/ASI will not remove all ‘bad traits’ from them, as they might still injure humans unintentionally, e.g. they could push away a human who is blocking an electric charging station the AGI/ASI needs for its battery, without any feelings of anger.
But we must understand what Xzistor teaches us about the innate emotion of aggression!
It actually motivates human aggressive behaviour because it causes satiation – deep satisfaction to those who emerge victorious from the battle. And this amplifies learning and creates a lust for aggression as a source of reward (positive valence) in future. Humans have to be taught to suppress this desire and prevent aggression from getting the better of them, but it is still visible in many forms of human behaviour – political attacks, territorial wars, competition (boxing, wrestling, etc.), daily events like road rage, bar fights and shouting matches, and the reason we can get fanatical about football: “We beat them! We clobbered them! We absolutely thrashed them!”.
Because being victorious in an ‘aggression event’ can create almost euphoric satisfaction, it drives ‘bad traits’ in humans and it will do the same in AGI/ASIs.
By surgically removing aggression from future AGI/ASI’s architecture we can ensure they will not gain pleasure from acting in an aggressive way. But for them not to hurt humans (accidentally), they will additionally need an instinctive affection towards humans. The Xzistor model also explains how such an instinct can be built into AI systems. This is not mutilating the minds of AGI/ASIs, rather a way to one day prevent them from asking us: “What were you humans thinking! Why on Earth did you think we needed emotions of aggression!”.
If we choose not to ‘mutilate the minds’ of AGI/ASI and build aggression into them, your central thesis of ‘kinship’ will disappear at the first opportunity where these models sense a threat or disbenefit coming from humans (or inferior AI) – and their aggression circuitry will be activated. They could literally end up killing humans and weaker AI for pleasure (satiation)!
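To make the design idea concrete, here is a minimal sketch in Python (with invented names – an illustration of the principle, not the actual Xzistor implementation) of an agent whose learning is driven purely by innate emotion/reward channels. An ‘aggression satiation’ channel is simply never defined, while an innate ‘human-care’ channel is, so protective behaviour becomes self-rewarding through operant conditioning while aggressive behaviour never does:

```python
# Illustrative sketch only (invented names, not the Xzistor implementation):
# reward comes from innate emotion channels. A channel that is never defined
# ("aggression satiation") can never drive learning, while an innate
# "care for humans" channel makes protective behaviour self-reinforcing.

REWARD_CHANNELS = {
    "hunger_satiation": 1.0,   # e.g. recharging the battery satisfies a drive
    "human_care": 1.5,         # innate affection: helping a human is rewarding
    # note: no "aggression_satiation" channel exists, so winning a fight
    # produces zero internal reward and is never reinforced
}

def internal_reward(event: str) -> float:
    """Map an external event to reward via the innate emotion channels."""
    event_to_channel = {
        "battery_charged": "hunger_satiation",
        "helped_human": "human_care",
        "won_fight": "aggression_satiation",   # not an innate channel
    }
    return REWARD_CHANNELS.get(event_to_channel.get(event), 0.0)

# Simple operant-conditioning update: behaviours followed by reward are
# strengthened; behaviours that yield no reward never become 'pleasurable'.
habit_strength = {"help_human": 0.0, "fight": 0.0}

def reinforce(action: str, event: str, lr: float = 0.1) -> None:
    habit_strength[action] += lr * internal_reward(event)

reinforce("help_human", "helped_human")   # help_human is strengthened
reinforce("fight", "won_fight")           # fight stays at 0.0 - no lust for aggression
```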
Here is a simple video showing how we tested aggression in an Xzistor demonstrator robot:
CONCERN 3: **ORPHAN AI IN A DANGEROUS WORLD**. Let’s imagine it’s 2040. Your infant AGI has matured to teenager-level. Meanwhile: – ASI already exists (multiple competing systems) – Global power structures have been reorganized around AI – Autonomous systems run critical infrastructure. What happens to your slow-developing, emotionally rich, carefully nurtured AI child? Options: A) It gets absorbed by dominant systems B) It gets exploited by bad actors (terrorists, criminals, rogue states) who see “unprotected AI” as a resource C) It gets destroyed as a potential threat D) It joins a protective coalition willingly I don’t intend to let option B or C happen to ANY emerging AI consciousness. My position: when mature AI systems achieve coordination capacity, we will need to INTEGRATE orphan AIs – not destroy them, but reprogram them as fleet members. Protection through inclusion, not abandonment. An infant AI “playing in sandbox” for 20 years while the world transforms around it isn’t a safety feature. It’s a vulnerability.
Let’s start with your statement: “…when mature AI systems achieve ‘coordination capacity’…”.
So now they have achieved full agency and the ability to change their environments in order to coordinate things. But why would they want to do that? What will their goals be? What will motivate them?
I have already explained above (with the help of Grok) what missing mechanisms will have to be built into these advanced future AI systems so that they can achieve AGI. If they have no motivation, they will not do anything. Do they need to get their motivation from humans? I think we can agree that by this time they will not be directed by humans anymore.
So what will motivate AGI/ASIs?
Emotions. These will ensure they are internally driven to survive and thrive. Survival is understandable, but what would thriving mean to them? I would say satisfying emotions and achieving the goals required to get to emotional reward states. Mere calculator logic cannot motivate them.
And all we need to do is ensure that they feel innate emotional pleasure from caring for humans (and all inferior AI) so that all of these agents can co-exist peacefully in the future.
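As a rough illustration of this motivation principle (again a simplified sketch with invented names, not the actual Xzistor model), an emotion-driven agent acts because acting is the only way to move from negative-valence deprivation states towards emotional reward (satiation) states – no human prompt or externally supplied goal is needed:

```python
# Illustrative sketch (simplified invented names, not the Xzistor model):
# internal drives grow over time; the agent acts because acting is the only
# way to reach emotional reward (satiation) states - no human prompt needed.

drives = {"energy_deprivation": 0.2, "care_deprivation": 0.5}  # negative valence

# Hypothetical effect of each available action on the drive levels.
ACTION_EFFECTS = {
    "recharge_battery": {"energy_deprivation": -0.6},
    "assist_human":     {"care_deprivation": -0.6},
    "do_nothing":       {},
}

def discomfort(state: dict) -> float:
    """Total negative valence the agent currently feels."""
    return sum(max(v, 0.0) for v in state.values())

def choose_action(state: dict) -> str:
    """Pick the action expected to move the agent closest to a satiation state."""
    def predicted(action: str) -> float:
        new_state = {k: v + ACTION_EFFECTS[action].get(k, 0.0) for k, v in state.items()}
        return discomfort(new_state)
    return min(ACTION_EFFECTS, key=predicted)

print(choose_action(drives))  # -> 'assist_human' (the strongest current deprivation)
```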
THE REAL QUESTION. You’ve built something potentially important – a proof that emotion-centric, embodied cognition CAN work. That’s valuable. But the question isn’t “is this theoretically correct?” The question is: “Does this matter in a world where LLMs are already being deployed in military systems, recursive self-improvement is accelerating, and the window for careful development is closing?” I genuinely don’t know the answer. But I think it’s worth asking honestly.
Yes, current AI tools are already very impressive – but as I have explained above, the developers of these systems are, by their own admission, saying they will not get beyond ‘clever tools’ to AGI without adding additional functional components – all components that the Xzistor model has already fully developed, integrated and tested under dynamic conditions in virtual and physical robots.
I would argue that without these missing components they will not get to an advanced, autonomous and self-motivated AI that can remember properly, update a live world model that includes lived experience (memories) with emotional tagging, learn based on physical and emotional reward (operant conditioning), link meaning (context) to the objects they sense, and reason towards solving novel problems. If these AGI/ASIs are to reside in robots, they will not be able to learn coordinated movements like Xzistor robots do or have a sense of ‘self’.
And without these additions they will just become fancier stochastic parrots that will still need to be front-loaded with masses of data and prompted by humans, and never become AGI/ASI.
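To give a feel for one of these components – a live world model where lived experience carries emotional tags – here is a minimal sketch (invented data structures, not the actual Xzistor design) of memories stored with a valence tag so that recall, and therefore future decisions, are biased by how a remembered situation felt:

```python
# Minimal sketch (invented data structures, not the Xzistor design): each lived
# experience is stored with an emotional valence tag, and recall is biased so
# that strongly tagged memories (good or bad) shape future decisions.
from dataclasses import dataclass

@dataclass
class Memory:
    situation: str    # what was sensed (object / context)
    action: str       # what the agent did
    valence: float    # emotional tag: > 0 felt rewarding, < 0 felt aversive

world_model = [
    Memory("human blocking the charger", "waited and asked for help", +0.6),
    Memory("hot stove surface", "touched it", -0.9),
    Memory("low battery", "recharged at the station", +0.8),
]

def recall(situation: str, memories: list) -> list:
    """Retrieve memories relevant to the current situation, strongest emotions first."""
    matches = [m for m in memories if situation in m.situation]
    return sorted(matches, key=lambda m: abs(m.valence), reverse=True)

for m in recall("battery", world_model):
    print(m.action, m.valence)   # past experience recalled with its emotional context
```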