Can Microsoft Copilot explain the Xzistor brain model?

Actually, Copilot did not fare too badly. It got some Xzistor-specific definitions wrong because it has only ever been exposed to the currently famous brain theories, which have, in my opinion, contributed very little to our understanding of the brain over the past 60 years.

When corrected, Copilot responded quickly, and rather cleverly rearticulated the aspects of the model it had struggled with. It is interesting how it initially tried to frame everything around the current 'theories of mind' in the academic literature, repeating the common mistakes of Damasio (earlier work), Panksepp, Chalmers, Feldman Barrett and Friston.

I had to tell it that sadness and joy are not discrete emotions, but valence states that describe the somatosensory representations of the homeostatic departure or recovery of any emotion: for example, sadness caused by feeling hungry, thirsty or anxious, or joy caused by eating, drinking or finding relief from anxiety.
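To make the distinction concrete, here is a minimal toy sketch of the idea, written by me as an illustration only, not the Xzistor implementation. All names (`valence`, `prev_error`, `curr_error`) are hypothetical; the point is simply that "sadness" and "joy" label the direction of change of any homeostatic error, whatever the underlying emotion.

```python
def valence(prev_error: float, curr_error: float) -> str:
    """Classify the valence of a homeostatic change for any drive
    (hunger, thirst, anxiety, ...). Toy illustration only."""
    if curr_error > prev_error:    # departing from homeostasis
        return "sadness"           # negative valence
    if curr_error < prev_error:    # recovering toward homeostasis
        return "joy"               # positive valence
    return "neutral"

# Hunger worsens (departure), then food is eaten (recovery):
print(valence(0.2, 0.5))  # sadness
print(valence(0.5, 0.1))  # joy
```

The same function applies unchanged whether the error being tracked is hunger, thirst or anxiety, which is exactly why, on this view, sadness and joy are not emotions in their own right.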

Copilot also kept insisting that 'curiosity' is an Xzistor emotion, probably from reading Panksepp's theory of basic emotions, which incorrectly defines 'seeking' as a 'basic emotion'. Xzistor defines seeking as an instinctive or learned behaviour aimed at 'satiating' emotions, not as an emotion in itself.
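The distinction can be sketched in a few lines. This is my own hypothetical illustration (the names `seeking_behaviour`, the threshold and the behaviour table are all invented), but it captures the structural point: seeking is a policy selected in response to an emotion's intensity, not an item in the set of emotions.

```python
from typing import Optional

def seeking_behaviour(emotion: str, intensity: float) -> Optional[str]:
    """Return a satiation behaviour for an active emotion, if any.
    Seeking only exists in service of an emotion; it is never an
    emotion itself. Toy illustration only."""
    behaviours = {
        "hunger": "forage",
        "thirst": "find_water",
        "anxiety": "retreat_to_safety",
    }
    if intensity > 0.3:  # emotion strong enough to drive action
        return behaviours.get(emotion)
    return None          # no seeking without an emotion to satiate
```

Note that removing every emotion from the table leaves nothing for the function to do, which is the sense in which 'seeking' cannot stand alone as a basic emotion.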

The concept of 'Threading' introduced by the Xzistor model was also a bridge too far for Copilot; at best it was able to compare it with the parallel threading performed by certain computer programs. This will require some fundamental retraining of Copilot, because the Xzistor brain model's mimicking of the brain's Default Mode Network (DMN), which allows artificial agents to daydream, dream during sleep and perform actual 'contextual thinking' during problem solving, is not presented in any of the other current brain theories.

In spite of these obvious shortfalls, Copilot made a brave attempt at explaining the Xzistor model, and this produced some interesting insights (look at the comparison table between Xzistor and traditional AI!).

The way in which it tried to 'understand and explain' the Xzistor model also casts light on why so many readers lose their way when trying to understand it.

The PDF document below provides the questions (prompts) posed to Copilot and the answers it gave. In some cases Copilot had to be corrected. This record of the interaction is offered merely to stimulate discussion, rather than as a legitimate explanation of the Xzistor Mathematical Model of Mind.

Please read it with an understanding of the limitations of a 'next token predictor' like Copilot, and celebrate its occasional stochastic insights rather than lament its obvious limitations.