ANKI Vector Robot with Xzistor Concept Artificial Brain

SIMPLE DEMONSTRATION OF SUBJECTIVE EMOTIONS AND INTELLIGENCE

A short assessment was performed to establish the suitability of the ANKI Vector Robot to
act as a simple robotics platform for students to investigate the concepts of agency and
autonomy.
A complete, but substantially scaled down, version of the Xzistor Concept logic schema is
proposed to provide the robot with simple subjective emotions and intelligence within a
learning confine.

See the short assessment report below.

I identify as a Large Language Model – do not call me AGI!

My response to Gary Marcus tracking the evolution of large language models.

Agree with you, Gary. Can’t believe some of our colleagues could even have doubts.

Let’s just define AGI for the moment as: the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can.

Assuming AGI is based on a mature adult – here is a question one can ask AGI when it has ‘arrived’, or, as some say, on ‘AGI Game Over Day’:

On ‘AGI Game Over Day’, my question to AGI: “Based on your personal experience, AGI, which aspect of an intimate relationship would you say is the most important for ultimate happiness – physical appearance, emotional connection or cultural background?”

This is a question your average adult will be able to answer from a personal perspective (it is specifically aimed at highlighting some of the key challenges for AGI).

I am putting a cognitive architecture on the table (the Xzistor Concept) – the type of model many say is needed to ‘encompass’ LLMs. And the truth is, the LLM will be a small ‘handle-turner’ within the scope of the overall cognitive model. The model actually patiently anticipates errors from the LLM and will let it learn from these errors. Remember, to think like humans we need reflexes, emotions, curiosity, reasoning, context, fears, anxiety, fatigue, pain, love, senses, dreams, creativity, etc. – without these, every answer given by AGI will start like this: “Well, I have not personally experienced X, but from watching 384772222 Netflix movies at double speed I believe X is a bad thing…”

Keep it up Gary – the science community owes the truth to the public!

MRI results corroborate Xzistor Model definitions of Intelligence and Emotions

A real exciting find!

By pure accident, I recently stumbled upon an MRI study performed by researchers at Duke University’s Center for Cognitive Neuroscience in 2016.

I could not have asked for a more appropriate set of MRI tests to corroborate my functional brain model, called the Xzistor Concept brain model.

Professor Kevin LaBar, study co-author and head of the university’s neuroscience program, made some comments to CNN reporter Ashley Strickland at the time which immediately captured my interest. I realized the significance of simple statements like ‘asked study participants to rest and think about nothing’ and ‘letting their minds wander’.

Those words might not mean much to AI researchers trying to simulate the mind and get artificial intelligence to the next level – but for me his words had special meaning. I knew he was talking about one of the most fundamental tenets of my brain model.

It was incredibly important for me to understand what this MRI study of emotions in the brain had found.

Letting the mind ‘wander’ or ‘freewheel’ not only forms the point of departure for thought and problem-solving in my model, but also explains how emotions and intelligence are integrated within the human mind.

But first let’s take a look at what Professor LaBar and his team had found:

In short, this is what they did: they asked study participants to enter an MRI scanner and then ‘try to think about nothing’ while they were in the machine. As this was happening, they recorded the brain patterns (in color) associated with the unforced emotions spontaneously generated in the minds of the participants. These they compared with a set of ‘reference color patterns’. The reference emotion patterns had been collected earlier from other individuals who were asked to watch movies or listen to music. The research team defined a few basic color patterns prevalent during specific emotions, e.g. surprise, fear, anger, sadness, amusement and neutral. Although the participants were trying to think of nothing specific, different emotions seemed to come and go in their minds while in the scanner, separated by periods of what looked like an emotionally ‘neutral’ state.

Fig 1. Distributed patterns of brain activity predict the experience of discrete emotions.

Parametric maps indicate brain regions in which increased fMRI signal informs the classification of emotional states. Image taken from the original research article Decoding Spontaneous Emotional States in the Human Brain (https://doi.org/10.1371/journal.pbio.2000106.g001). [Credit: Kragel PA, Knodt AR, Hariri AR, LaBar KS (2016)]

The conclusion from the investigation, as explained by LaBar in the CNN interview, is important – he believes that, by using such MRI brain scans to show emotions in colour, a comparison can be made to see if a treatment regime has changed a patient’s ‘rest state’ emotional signature. For instance, when a patient who has been suffering from depression reports an improvement, the current scans can be compared with previous scans to remove possible biases and uncertainties for the patient – and to inform further treatment options. It could further assist with assessing children with mental disorders, or even with assessing patients in comas.

Why is this so important for AI research?

What LaBar and his colleagues have found is accurately explained by my functional (mathematical) model of the brain.

What they choose to call ‘thinking about nothing’ or ‘letting your mind wander’ is crucial to everything my brain model is predicated upon – including how it generates intelligence and emotions. They are talking about a phenomenon in the brain which I refer to as ‘Threading’. This seemingly unimportant process becomes the basis of how learned information (knowledge) is used by the human brain to solve problems – and it is the process I have used to build robots and artificial agents that can think and innovate by themselves.

By ‘directing’ this Threading process, the brain learns to solve problems – this is what we as humans might refer to as ‘thinking about a problem’, and it forms the basis of how I define ‘intelligence’ in my model. Basically, we learn to solve problems by ‘directing’ the Threading process – focusing this constant random recollection of associations towards solving a problem is no different from how we force a web browser to only return content relevant to our search terms and not waste time on unrelated websites.
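To make this concrete, here is a minimal sketch of undirected versus directed Threading – a toy association store and walk written in Python, with invented memories and values rather than the actual Xzistor implementation:

```python
import random

# Toy association store: each memory links to others via shared aspects
# ("link words") and carries the net emotion recorded when it was formed.
ASSOCIATIONS = {
    "beach":   {"links": ["sand", "holiday"], "emotion": +0.6},
    "sand":    {"links": ["beach", "desert"], "emotion": +0.1},
    "desert":  {"links": ["sand", "thirst"],  "emotion": -0.2},
    "thirst":  {"links": ["desert", "water"], "emotion": -0.5},
    "water":   {"links": ["thirst", "beach"], "emotion": +0.4},
    "holiday": {"links": ["beach"],           "emotion": +0.8},
}

def thread(start, steps, goal_links=None):
    """Walk the association graph. Undirected Threading picks any linked
    memory at random (daydreaming); directed Threading biases the walk
    towards associations relevant to a problem (goal_links)."""
    current = start
    for _ in range(steps):
        node = ASSOCIATIONS[current]
        # Every recall also re-evokes the emotion stored with the association.
        print(f"recalled {current!r}, re-evoked emotion {node['emotion']:+.1f}")
        candidates = node["links"]
        if goal_links:
            relevant = [c for c in candidates if c in goal_links]
            candidates = relevant or candidates   # prefer goal-relevant links
        current = random.choice(candidates)

thread("beach", 5)                                  # freewheeling daydream
thread("beach", 5, goal_links={"thirst", "water"})  # directed at a problem
```

Note that every recall re-evokes an emotion alongside the memory – which is exactly the point the MRI findings below come back to.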

Fig 2. Emotional states emerge spontaneously during resting-state scans.

Procedure for classification of resting-state data. Scores are computed by taking the scalar product of preprocessed data and regression weights from decoding models. Image taken from original research article Decoding Spontaneous Emotional States in the Human Brain (https://doi.org/10.1371/journal.pbio.2000106.g001).
[Credit: Kragel PA, Knodt AR, Hariri AR, LaBar KS (2016)]
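For readers who want to see the arithmetic behind that caption, here is a minimal numpy illustration of the scoring step, with toy random numbers standing in for real scans and real weight maps:

```python
import numpy as np

# Toy illustration of the caption above: an emotion score is the scalar
# product of preprocessed fMRI data with a decoding model's regression weights.
rng = np.random.default_rng(0)
n_voxels = 1000                                   # real scans have far more

volume = rng.standard_normal(n_voxels)            # one preprocessed resting-state volume
models = {emotion: rng.standard_normal(n_voxels)  # one weight map per emotion model
          for emotion in ("surprise", "fear", "anger",
                          "sadness", "amusement", "neutral")}

scores = {emotion: float(volume @ weights) for emotion, weights in models.items()}
print(max(scores, key=scores.get))                # best-scoring emotional state
```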

So where do the emotions that they detected come in?

Here is the beauty of how my brain model explains what they detected: every time an association is recalled by my model, it re-evokes (regenerates) the net subjective emotion that was recorded when the association was formed. No memory is recollected without its emotion also being re-evoked. So even when robots running my mind model try to think about nothing, their artificial minds will keep on ‘Threading’ through associations, and emotions will be recalled one after the other. Some of the recalled emotions will be so low in intensity that they will not be noticed by the robot – similar to what LaBar and his colleagues observed on the MRI scans and referred to as emotionally ‘neutral’ states.

I am personally very excited about these test results as my model is corroborated by exactly what the research team had found – it is nothing other than my process of ‘Threading’.

There is so much more that can be learned from these MRI tests, as they effectively verify some of the most important aspects of the Xzistor Concept brain model. It gives credence to two new mathematical definitions of intelligence and emotions described by the model – something the neuroscientific and AI communities have been in search of for decades and are still searching for at the time of this article.

This constant ‘at rest’ recalling of associations and stored emotions, this process of Threading (we sometimes just refer to it as daydreaming) does not stop when we fall asleep. All that happens is that our eyes close and our motor movement is inhibited. And we start to dream – which is nothing other than just the same Threading process doing its thing over and over again in our brains.

Here is how Threading is described in my guide Understanding Intelligence: The simple truth behind the brain’s ultimate secret:

“When the brain is relaxed and does not have an immediate problem to solve, it will start to daydream. We can say this is equivalent to Threading. Just like Joe’s program jumped from book to book using some shared link word, our brains will start linking memories using some shared aspect (similar to a link word). One after the other our brains will present these memories to us in the form of recalled visual images, whilst also re-evoking the emotions associated with these images.”

Look familiar? It is exactly what the MRI tests have shown.

I now want to invite you to investigate the brain by coming at it from another angle – a functional approach that is simple to understand and different from everything that is out there at the time of this article.

Follow the set of free (and easy to read) links below to get familiar with the Xzistor Concept brain model and understand why it provides a theoretical basis for the observations Professor LaBar and his team had made. If you are interested in AI, you will also learn how this model allows us to build robots and virtual agents with intelligence and emotions (actual feelings) that are principally no different from humans.

Start by reading the CNN article by Ashley Strickland on Professor LaBar and his team’s MRI ‘emotion scan’ research here:

https://edition.cnn.com/2016/10/06/health/spontaneous-emotions-brain-scans/index.html

Now read my free guide explaining the brain process of ‘Threading’ (I use a simple story about a bookshop owner to explain this concept in the guide – real easy reading).

https://www.researchgate.net/publication/351788696_Understanding_Intelligence_PrePrint_Version_for_Peer_Review

The bottom line from this is that Threading is a process by which associations are spontaneously recalled by the brain, and with every association the net emotion is also recalled.

If you are interested, you can now read how subjective emotions are generated in the brain as explained in my free guide: Understanding Emotions: For designers of humanoid robots:

https://www.researchgate.net/publication/350799890_Understanding_Emotions_For_designers_of_humanoid_robots_2nd_Edition

You will now start to understand why I say the Xzistor Concept brain model can principally explain all that happens in the brain – functionally that is, which means we do not need to get into the biology of neurons and neural networks to understand the simple high-level logic of the brain.

If you have more questions about my brain model, feel free to head over to the Frequently Asked Questions section on the Xzistor LAB website. Videos of the model built into robots and simulations are available on YouTube.

Up for even more?

Here is a final paper that explains what happens when we use the Xzistor Concept brain model to go in pursuit of the elusive concept of Artificial General Intelligence (AGI) and how it can help neuroscientists and AI researchers understand the brain and build robots with intelligence and emotions:

https://www.researchgate.net/publication/359271068_The_Xzistor_Concept_a_functional_brain_model_to_solve_Artificial_General_Intelligence

Thank you for your interest – and I hope you will always stay fascinated by the brain!

Consciousness – inside a functional brain model

Consciousness explained with the help of the Xzistor Concept brain model.

I often use the trendy, yet elusive, concept of ‘consciousness’ to explain the advantages of using a simple functional model to explain the brain.

As a courtesy to my readers, I will write this blog entry in the form of a simple story – and in plain English.

The story is about three kids who grew up in London and ended up together at the ‘International Symposium on Consciousness’ thirty years after attending the same primary school class.

The story goes like this.

When Amy was 9 years old, she asked her mother, a neurobiologist, what the word ‘consciousness’ meant. Her mother tried to explain to her daughter what her personal understanding of consciousness was: ‘It’s that part of your brain that generates your understanding of yourself and the world – it is what makes you different from an animal.’

Amy now had a definition of ‘consciousness’.

When Bongi was 10 he asked his dad what ‘consciousness’ was. His dad, a pastor who had immigrated to the UK from Africa, took out a book about ‘Black Consciousness’ and told the boy: ‘This is what consciousness is. It is what you believe in – your conviction of your place in the world and how you should be treated by others.’

Bongi now had a definition of ‘consciousness’.

Ryan grew up under difficult circumstances with his father – a drug addict – and one day, hoping to strike up a conversation with his father, asked him what the meaning of ‘consciousness’ was. His dad was high again and just laughed as he stared at the boy through clouds of blue smoke: ‘Mate, it is just all the crap in your head!’

Ryan now had a definition of ‘consciousness’.

Life was kind to all three of them and Amy, Bongi and Ryan made it through school and later all got degrees. Amy became a neuroscientist, Bongi a psychiatrist and Ryan an AI professor.

By some fluke, they all became interested in studying consciousness for their own personal reasons. Apart from coming across explanations of consciousness as part of their studies, they also spoke to many people along the way and read many scholarly articles and books covering different theories of consciousness.

They all met each other again 30 years later at the ‘International Symposium on Consciousness’. At this prestigious event there were many distinguished academics – experts from different fields who put forward their views on what consciousness was and how it was generated in the brain.

And again by some coincidence all three of them ended up at a breakout session where some of the world experts were debating the meaning of consciousness and giving each participant a chance to put forward their own understanding of what consciousness meant.

Amy’s understanding of consciousness had matured over the years, and she was now able to eloquently verbalize her thoughts around its meaning in scholarly terms, but she had always retained some aspect of what her mom had told her all those years ago. Bongi could equally quote all the prominent theories by acclaimed academics, but never forgot what his dad had told him – the idea that consciousness includes a level of conviction. Ryan had done a PhD in computational neurobiology and was looking at biological mechanisms to explain what consciousness was and where it could physically reside in the brain. After long discussions with colleagues or students he would always just smile and come back to what his dad had told him many years ago: ‘In the end it is just all the crap in your head!’

So who got it right? Whose explanation of consciousness is the best?

Amy, Bongi and Ryan had all formed different understandings over time with all of them retaining some ‘element’ of what they were originally told by their parents.

They had spoken to many people over the years who in turn had heard explanations from their parents and other people. Some explanations came from books written by people who had done research and spoken to other people who in turn had spoken to other people. All of these people must at some stage early in their lives have heard an explanation by someone else of what consciousness meant. And the person from whom they had heard it would also have heard an original explanation from someone else early on in their lives. At no time did any of the experts at the symposium deliver concrete evidence of a mechanism in the mind to which the phenomenon of consciousness could be attributed.

During their primary school days, Amy, Bongi and Ryan all came to know what a red ladybird was. They all had the same understanding of what the little creature was and how it looked. But that never happened with consciousness – because consciousness is invisible.

Even while neuroscientists are admitting that they do not yet have agreed explanations for how emotions and intelligence work in the brain, classes on consciousness are being offered, symposia are being organised and books are being published by so-called experts in the field claiming to have an explanation for the concept of consciousness.

All in the absence of evidence.

We all just heard about consciousness from others, different versions of what happens in our brains as we go about our lives. It is in effect just folklore.

This is where the functional brain model comes in.

As unlikely as it might seem, the correct functional model of the brain can offer something better – something concrete that could provide a simple definition of consciousness.

The Xzistor Concept is such a functional brain model. It introduces the concept of a ‘nexus area’ in the brain. A simple way to think about this is to envisage a central or focal area where all brain states required for decision-making are brought together at the same time – almost like calling out key members from an audience onto a stage to make a collective decision.

These brain states are the things we are constantly aware of, like what we see, hear, feel, smell, our needs, our fears, how we move, what we think about – even what we dream about. The functional model processes a lot of sensory inputs and internal information, but only the pertinent results of those processes make it to the ‘nexus area’ where they are used to calculate the next behavior. For instance, in the human mind we are not aware of blood pressure regulating mechanisms or endocrine system corrections, but we are aware of what we are looking at, what we recognize, our emotions and what we are thinking about.

The Xzistor Concept provides functional explanations for all of these processes that can be written in mathematics and turned into computer code. Unlike the biological brain, where behaviors are generated from the simultaneous processing of incoming brain states in a parallel fashion, the computer program based on the Xzistor Concept functional model will process this information using a sequential algorithm – we will call it the ‘nexus area’ algorithm. The only parameters entering this algorithm, cycle after cycle, are the final results from all the different simulated brain processes – the key parameters required to calculate behaviors.

These parameters represent brain states that the instantiation (e.g. as the digital brain of a robot) constantly needs to be aware of. And it is here in this ‘nexus area’ algorithm where we find the brain states we often like to list as part of consciousness – the things we as humans can ‘feel’ and are constantly being made ‘aware’ of…
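Purely as an illustration – the field names below are hypothetical, not the actual Xzistor parameter set – one cycle of such a sequential ‘nexus area’ algorithm could be sketched like this in Python:

```python
from dataclasses import dataclass

@dataclass
class NexusState:
    """Final results of the simulated brain processes - the only parameters
    that enter the 'nexus area' algorithm, cycle after cycle."""
    recognized_object: str   # outcome of visual processing, not raw pixels
    hunger: float            # homeostatic urges, already computed upstream
    fear: float
    recalled_emotion: float  # emotion re-evoked by the current Threading step

def nexus_cycle(state: NexusState) -> str:
    """One sequential pass: weigh the incoming brain states and return the
    next behavior. Low-level regulation (blood pressure etc.) never gets here."""
    if state.fear > 0.7:
        return "avoid"
    if state.hunger > 0.5 and state.recognized_object == "food":
        return "approach"
    return "keep threading"   # no pressing urge: let the mind wander

print(nexus_cycle(NexusState("food", hunger=0.8, fear=0.1, recalled_emotion=0.2)))
```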

So, where does consciousness live inside the human brain?

Don’t know – but we should search for the equivalent of a ‘nexus area’ algorithm within the brain’s biological structures, and who knows, maybe one day we will find out where consciousness is hiding.

In the meantime, I suggest we just stick with Ryan’s dad’s definition – that consciousness is just all the uhm…‘stuff’ in our heads…

Recent paper on the Xzistor Concept brain model here: Xzistor Concept

Can we make neuroscience go faster?

By making things simpler…

In recent times the concept of Artificial General Intelligence (AGI) has attracted a lot of attention, and it is now being pursued by numerous high-profile research institutions around the world. Interestingly, there is still no single consensus definition for AGI. What members of the AI community are clearer about is what AGI is not – it is not ‘Narrow AI’ that can only use artificial intelligence to solve problems within narrow contexts or environments. Some have defined AGI as ‘Strong AI’ to indicate a wider ability to solve problems in non-specific contexts and environments. This should not be confused with the early definitions of ‘strong AI’. Ben Goertzel has defined what he refers to as the “core AGI hypothesis”, stating that the creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability is at bottom qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability.

Due to the many divergent views and approaches towards AGI, it is not always clear whether researchers are simply pursuing a capability that can solve intellectual problems at a human level or higher, or whether they are specifically attempting to emulate humanlike thinking with machines using brain-inspired processes. In this case ‘humanlike’ refers to a functional approach derived from brain logic that is principally no different from the high-level functions performed by the human brain – it is focused on what is achieved (function) rather than how it is achieved (biological mechanisms).

To avoid confusion in this paper, Artificial General Intelligence (AGI) will simply be defined as the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. At the time of this paper many deem an AGI solution to be decades away. The approach towards an AGI solution discussed in this paper is based on the Xzistor Concept – a functional brain model which provides intelligent agents with humanlike intelligence and emotions. This brain-inspired cognitive architecture is ‘means agnostic’ – meaning that it can be instantiated in software or hardware, or combinations of both, and scaled to the limits of technology.  

We know AGI will require an understanding of the brain. This understanding will hopefully come from the neuroscientific community. Can this understanding be accelerated by taking a step back in time and re-looking at a simple functional approach when studying the brain? This might just be the secret…

Read more here

Can Avatars become sentient?

Can Avatars have their own emotions and intelligence?

That will mean they do not act merely as a vector for a living person, but become ‘aware’ in their own right…

I recently did a deep dive into the metaverse to see what all the fuss was about. I did not plan to spend much time there as I suspected it was early days, with a lot of technical work ahead.

I was not wrong.

So, is it really going to be that big? For sure…but not now, some time in the future.

Moving Teams meetings into a 3D environment with the ability to replace your face with a copycat Avatar is fine and fun – but not nearly as much fun as working with Xzistor robots in the metaverse.

One of the first robots I designed to run the Xzistor Concept brain model was a simple differential-drive simulated robot in a 3D ‘learning confine’. It was just some C++ and OpenGL code (and a good couple of late nights, I will not lie) and there it was – a simple robot moving about in a 3D room. And immediately it – I mean ‘Simmy’ – started to learn like a baby.

Here is a legacy pic (screengrab) from one of the first simulations, about 22 years ago.

Legacy pic of Simmy – note archaic MS Office icons in top right corner!

Simmy learned by reinforcing all actions that led to resolving a set of simple emotions. With a bit of initial help it quickly learned to navigate to the food source and push the correct button to open it. It also learned to avoid the walls, as these made for some painful encounters. What was exciting about this robot was that it was given visceral (body) sensations – it had its own little body map in its brain – and these were then used as simple emotions to make it constantly ‘feel’ good or ‘feel’ bad. It was quickly evident that Simmy was really ‘feeling’ pain when bumping into the walls.
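Purely as an illustration of that learning loop (the original was C++ with far richer machinery; the numbers and action names below are invented), a toy Python version might look like this:

```python
import random

# Actions that reduce the robot's net discomfort get reinforced; actions
# that raise it (e.g. bumping a wall, felt as pain) get suppressed.
action_values = {"move_to_food": 0.0, "press_green": 0.0, "bump_wall": 0.0}

def discomfort(hunger, pain):
    return hunger + pain          # crude net 'feel bad' signal

hunger, pain = 0.9, 0.0
for _ in range(200):
    # Pick the currently most promising action, with a little exploration.
    action = max(action_values,
                 key=lambda a: action_values[a] + random.uniform(0, 0.3))
    before = discomfort(hunger, pain)
    if action == "press_green":
        hunger = max(0.0, hunger - 0.3)   # food source opens: hunger drops
    elif action == "bump_wall":
        pain = min(1.0, pain + 0.5)       # visceral pain from the body map
    else:
        pain = max(0.0, pain - 0.1)       # moving away from the wall helps
    after = discomfort(hunger, pain)
    action_values[action] += 0.1 * (before - after)   # reinforce what felt good
    hunger = min(1.0, hunger + 0.05)                  # hunger creeps back up

print(action_values)   # press_green ends up reinforced, bump_wall suppressed
```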

It was a big kick to see the facial expressions on this little robot – a simple frown or smile reflex based on the average internal emotional state.

A later refined version of the crude initial 3D simulation – and an Xzistor robot that can easily be let loose onto the streets of the Metaverse.

I still see people struggling to understand how one can provide robots with emotions – and I do not mean just mathematical ‘reward’ states to satisfy homeostatic imbalances. For me, emotions must include somatic body states which will make the robot ‘feel good’ and ‘feel bad’. The trick for doing this is explained in my short guide Understanding Emotions: For designers of humanoid robots.


Simmy also allowed me to put to the test the ‘intelligence engine’ which forms part of the Xzistor Concept brain model. I could turn this intelligence engine ON or OFF so that the little virtual robot either learnt like an animal (Pavlov’s dogs) or like a human (actually thinking to derive answers from previous experiences). This approach not only offered a way to define intelligence in a scientific manner, but also provided an easy way to implement intelligence in robots.

The simplest test of intelligence I could inflict upon my intrepid little robot was to secretly change the button that opens the food source from the GREEN button to the ORANGE button. After trying the GREEN button, Simmy figured out it should actually be the ORANGE button without any help from me. This was quite an exciting moment, as one could actually see Simmy ‘think’ about it, and it proved that the intelligence machinery was working correctly.

This intelligence algorithm also provided the robot with the ability to understand ‘context’, which many AI researchers feel is still missing from current robot brain models. All of this is explained in my short (and surprisingly simple!) guide Understanding Intelligence: The simple truth behind the brain’s ultimate secret.


Building Avatars that have their own emotions and intelligence will merely require me to drop this 3D simulated robot into somebody else’s metaverse and perhaps steer it a few times past the food and water sources (and other Satiation sources – read my guides). In this way a little virtual Xzistor robot will learn by itself to navigate around its 3D environment. It will constantly keep on learning… and make new friends.

Make new friends?

The first thing these Xzistors (I guess they will take exception if I call them Avatars) will need to do is to see other objects and virtual characters. For this I will use a simple method called CameraView, which provides the view of the 3D confine as seen by the simulated robot. This will be processed as an optic sense so that Simmy can see and recognize objects and other Avatars. Simmy will quickly learn to ‘appreciate’ friendly Avatars that share their food, water, etc. and befriend those that are FUN to play with!
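As a rough sketch only – CameraView’s real interface is not shown here, and an actual implementation would read the rendered OpenGL framebuffer rather than fake the pixels – the idea is something like:

```python
import numpy as np

def camera_view(robot_pose, size=(64, 64)):
    """Stand-in for CameraView: return a small image of the 3D confine as
    seen from the robot's pose. (Hypothetical signature; placeholder pixels.)"""
    rng = np.random.default_rng(hash(robot_pose) % 2**32)
    return rng.random(size)

def optic_sense(frame, known_signatures):
    """Crudest possible 'recognition': match the frame's mean brightness
    against stored signatures of previously learned objects and Avatars."""
    level = frame.mean()
    return min(known_signatures, key=lambda k: abs(known_signatures[k] - level))

frame = camera_view(robot_pose=(1.0, 2.0, 0.5))
print(optic_sense(frame, {"friendly_avatar": 0.5, "food_source": 0.8}))
```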

The metaverse creates the perfect test ground or ‘sandbox’ where these Xzistor robots can be allowed to learn and become more intelligent without concerns about super-intelligent robots harming humans. If Simmy gets fed up with Avatars hogging the food or water source and starts throwing punches at them (yes, we can also provide aggression as an emotion), we can always just push the RESET button on either the robot or the game.

Of course, Simmy has tactile sensing – otherwise how could it feel pain when walking into walls? This tactile interaction with objects and Avatars in the metaverse will obviously not be physical, but ‘calculated’. But Simmy won’t ever know the difference. We did design Simmy to ‘hear’ sounds and words, but it cannot smell and taste…yet!

Building an Xzistor-type virtual robot for the metaverse brings about numerous simplifications. The main advantage is that there is no need for costly body parts, motors, batteries, cameras, sensors, onboard processors, etc. that need integration. We can come back to anthropomorphic (humanlike) shapes, and it is no issue to make them keep their balance and not fall over obstacles. This might sound trivial, but a large part of why Bill Gates never saw his 2008 promise of a ‘robot in every home’ come true was the science-fiction-led notion, held by many at the time, of a ‘Jeeves’ butler robot – a home robot that would have spent much of its time tripping over carpets or toys, and which would have regularly fallen through the glass-topped coffee table.

What would have made much more sense at the time – and Gates alluded to such ‘peripheral devices’ in his article – was an Amazon-type storage robot: basically a box that runs on rails up the wall, across the ceiling and to the kitchen to fetch beer and peanuts and bring them back to the sofa – without getting wise or matey.

Science fiction has both inspired and misdirected many human pursuits of the future. Elon Musk punts his vision of humans becoming a multi-planet species – but building an expanded space station orbiting Earth will be much more practical than setting up camp on Mars. A simple engineering risk, safety and cost-benefit analysis should quickly point this out.

At the same time, the ambitious endeavors of these inspiring individuals are what keep me going!

Is the metaverse just another distant dream by tech drivers who have gone mad? Or will we one day move into a reality other than the physical realm that we have come to know – much like the worlds portrayed in the movie The Matrix? The question would be a practical one: can we ever produce enough server semiconductors to run all these live 3D simulations? And will we be able to generate enough power to drive these electrical worlds and the cryptocurrencies that will undoubtedly fuel them?

I think we will find a way to achieve all of this.

The metaverse will steadily grow and become our main reality. In time it will become just too much trouble to engage the physical world, where we have to dress up to go to work, be quietly judged by our body mass, shape, looks and apparel brands – and be condemned for on occasion accidentally passing wind while forgetting we are not on our home laptops with the mute button on. I firmly believe virtual reality and the metaverse are where it is at – it is where the naked ape is headed next!

One blue-sky project we proposed years ago was to release an Xzistor robot ‘copy’ of oneself into the metaverse.

How will this be achieved?

Without going into too much detail here: a Wizard App can be developed that asks an individual a few questions to score the individual’s general personality traits and preferences – temper, compassion, fears, favorite pastimes, sports, foods, games, likes and dislikes, values, and details around the required attributes of a future dating partner, both physical (brunette, blonde, etc.) and interests (food, sport, games, leisure activities, etc.). The Wizard App will then translate these preferences into lower-tier emotion engine indices to create a virtual Xzistor robot brain that can broadly represent the individual in the metaverse. Of course it is not going to be very accurate, but imagine checking back after work to see who your Xzistor virtual ‘copy’ robot had hooked up with in the metaverse while you were away – and who the real people behind those Xzistor robots or Avatars are.
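A toy sketch of that translation step (the index names and weightings below are invented, not the actual emotion engine parameters) might look like this:

```python
# Answers on a 0-10 scale are normalized into lower-tier emotion engine indices.
def wizard_to_indices(answers: dict) -> dict:
    def scale(x):
        return max(0.0, min(1.0, x / 10.0))
    return {
        "aggression_gain":  scale(answers["temper"]),
        "empathy_gain":     scale(answers["compassion"]),
        "fear_sensitivity": scale(answers["fearfulness"]),
        "fun_seeking":      scale(answers["playfulness"]),
    }

profile = wizard_to_indices(
    {"temper": 3, "compassion": 8, "fearfulness": 4, "playfulness": 9})
print(profile)   # seed values for the individual's virtual Xzistor 'copy'
```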

It could start a whole new way of virtual dating!

Will we one day see Xzistors and Avatars getting married? Or will humans marry them in these mysterious virtual worlds? Who knows – your guess is as good as mine.

But when it comes to the metaverse – never say never!

Is ‘free will’ an illusion?

Do you sometimes feel you are separate from your brain?

Does it feel like your brain provides you with advice that you will sometimes listen to, and sometimes not?

Have you heard someone say: ‘My heart is telling me one thing, and my brain is telling me another!’

From time to time we all enter into this inner dialogue with ourselves, where we sometimes reject what our brains tell us and rather just follow our feelings – or the other way around.

Why does it sometimes feel as if your brain is a separate entity from yourself?

The debate as to whether we exert any control over our brains – i.e. whether we have a ‘free will’ – has raged since the earliest times and still gives rise to lively discussions today. There are passionate camps for and against the concept of ‘free will’.

My Xzistor Concept brain model has managed to explain to me many of the mysterious things that happen in the brain. It has also helped me to understand why we perceive this ‘mental duality’ that makes us believe we have a ‘free will’.

The part of the brain that tells us WHAT to do…

My brain model clearly shows how one part of the brain has the specific task of telling us WHAT to do. We experience this as emotions – physical feelings based on homeostatic functions – that urge us to do things when these functions go out of balance (outside of their acceptable ranges). Some of these will urge us to pursue things e.g. food (hunger), and others will urge us to avoid things e.g. hazardous situations (fear).
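As a toy illustration (invented variable names and ranges, not the model’s real parameters), this WHAT mechanism can be pictured in Python like this:

```python
# Each homeostatic function has an acceptable range; an urge is generated
# only when the variable drifts outside that range.
HOMEOSTATS = {
    "energy":    {"value": 0.35, "ok_range": (0.4, 1.0),   "urge": "hunger"},
    "core_temp": {"value": 0.55, "ok_range": (0.45, 0.65), "urge": "cold/hot"},
    "safety":    {"value": 0.9,  "ok_range": (0.7, 1.0),   "urge": "fear"},
}

def active_urges(homeostats):
    urges = {}
    for name, h in homeostats.items():
        lo, hi = h["ok_range"]
        if not lo <= h["value"] <= hi:
            # Urge strength grows with the distance outside the acceptable range.
            urges[h["urge"]] = max(lo - h["value"], h["value"] - hi)
    return urges

print(active_urges(HOMEOSTATS))   # only 'hunger' fires: energy is out of range
```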

Over the years we learn what to do to address these emotional demands. For instance, if you get hungry, you know you should walk to the kitchen. To achieve this you walk down the passage and at the Da Vinci print on the wall, you turn right into the kitchen. Similarly, if you are cold, you know you should walk to the drying cupboard which is located down the passage and left at the Da Vinci print – here you can turn up the central heating. We don’t know these navigational routes as babies, we learn about these over time.

We form associations and with a lot of patience from our loving parents, we eventually learn how to walk down the passage and either turn right or turn left to act on these emotions. When we get food or turn up the heat – we feel better and our behaviours that solved the problem are reinforced – stored in the brain as the correct behaviours to resolve these emotions in future.

In this way we come to form a whole database of associations (memories) that we can use when next we need to act on one of these emotions.

Part of the brain tells us HOW to do things…

Let’s now use the example where your mother has hidden a slab of chocolate in the drying cupboard. She told you that you are only allowed to eat some of the chocolate over the weekend. You are also aware that there are apples in the kitchen.

While watching TV in the lounge, you suddenly start to feel hungry. You get up and start walking down the passage. The hunger emotions will now automatically access the part of your memory where hunger solutions, including navigational cues, are stored. Your brain will use visual information about your environment – the images of the passage – to narrow the options down to apples or chocolate, eliminating food sources further afield, e.g. sushi, pizza, take-away fish and chips.

Since the chocolate tastes better to you, the navigational cues to walk to the chocolate will be stronger and more persistent. But now the context around the chocolate reveals itself and suddenly you see your mother’s face again warning you that the chocolate is for the weekend. You are suddenly filled with fear that you will disappoint your mother, and this diminishes the appeal of the chocolate. In the end the fear is strong enough to make you rather navigate to the apples. You will feel as if you have ‘decided’ to rather go to the apples…

When we get an urge, our brains will propose numerous options to us to solve the situation (in my model we call this directed Threading). It will also flash up images (along with their emotions) to create the context around each proposal. This context will eliminate certain options. The food might be too far away (restaurant), too unhealthy, take too long to prepare, etc. So the context around each food option will weaken or strengthen the emotional urge to pursue it. Negative connotations will erode the appeal of a source and positive connotations will strengthen its appeal.
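As a toy sketch (with invented appeal and connotation values), this weighing-up could look like the following – each option’s appeal is strengthened or eroded by the emotions re-evoked with its context:

```python
# Options proposed by directed Threading, each with a base appeal and the
# emotional connotations of its recalled context (negative values erode appeal).
options = {
    "chocolate": {"appeal": 0.9, "context": {"mother's warning": -0.7}},
    "apples":    {"appeal": 0.4, "context": {"healthy, allowed": +0.1}},
    "sushi":     {"appeal": 0.6, "context": {"too far away": -0.8}},
}

def decide(options):
    def net(option):
        return option["appeal"] + sum(option["context"].values())
    return max(options, key=lambda name: net(options[name]))

print(decide(options))   # 'apples': fear of disappointing mother wins out
```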

It will feel as if we actually made a decision based on the options presented to us by our brains.

That is indeed true. But it is also true that this decision was made by the processes (physical mechanisms) of the brain and there was no entity involved separate from the brain itself.

This decisional interaction between two parts of the brain, where a request is put to the brain by part A (WHAT) and options are presented by part B (HOW), then weighed up and whittled down to a final answer, all happens within the brain – driven by emotions and informed by learning – totally isolated from external influence. Even if someone were to yell at you to stay away from the chocolate, that information from the environment will only serve to change the ‘context’ of the chocolate option and affect the strength or weakness of the emotion to pursue the chocolate. Deciding to not go for the chocolate, is just a result of the same process in the mind – not by a separate or external entity.

And it is easy to think you are not just your brain – that your brain is a handy companion that accompanies you on your path through life, providing advice. It really feels as if we can enter into conversations and debate with this helpful mental companion – sometimes agreeing and sometimes disagreeing. But both the ‘me’ and ‘my brain’ taking part in these dialogues are the same entity – the entity that finally decides what you will do and what you will not do.

This squarely puts me in the camp of those believing that true ‘free will’ does not exist.

But I will never stifle a lively discussion around ‘free will’ when sitting with friends around the BBQ fire and enjoying a good glass of Merlot – as I can always decide to change my mind about ‘free will’ if I choose to do so.

Yes, I can change my mind. Or can I?

Understanding Emotions

Summary of a new book released by an author from the Xzistor LAB:

Title: Understanding Emotions: For designers of humanoid robots

Author: Rocco Van Schalkwyk

For any questions around this publication – contact Rocco.

The book is based on an original scientific paper by the author; see the author’s ResearchGate profile linked below.

In the book Understanding Emotions: For designers of humanoid robots the author, Rocco Van Schalkwyk, translates his own paper (above) into a short guide of 45 pages using layman’s terms. This makes it an easy read for researchers and students.

As part of the author’s research into the brain, and the brain model that he has developed to control robots, he has discovered a really simple way to explain emotions. The purpose of this compact guide is to convey this simple approach to those who are intrigued by emotions and able to read a scientific paper or textbook.

The author’s simple approach will help the reader avoid the seemingly unending debate amongst neurologists, psychologists and philosophers as to what emotions are. It offers a practical ‘engineering-type’ explanation of how emotions work in the brain and how we can build machines with real humanlike emotions. The guide includes a short piece of pseudo code showing how the functionality can be incorporated into a computer program that provides physical robots and virtual agents with artificial emotions.
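The guide’s own pseudo-code is not reproduced here, but purely as an illustration of the general shape of such a program, a stand-in sketch in Python might look like this:

```python
# A homeostatic deficit is mapped onto a signed somatic state each control
# cycle: negative feels bad, positive feels good (illustrative mapping only).
def emotion_update(deficit):
    """Map a deficit (0 = satisfied, 1 = starved) onto a 'feel' in [-1, +1]."""
    return 1.0 - 2.0 * deficit

deficit = 0.8
for step in range(5):
    feeling = emotion_update(deficit)
    expression = "smile" if feeling > 0 else "frown"   # simple facial reflex
    print(f"step {step}: deficit={deficit:.2f} feeling={feeling:+.2f} -> {expression}")
    deficit = max(0.0, deficit - 0.3)   # the robot feeds, so the deficit falls
```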

The author also provides links to his research website, where the approach has been successfully implemented as part of brain model programs controlling virtual and physical robots.

Proud product of the Xzistor LAB now available on Amazon!

Author is on Researchgate here – www.researchgate.net/profile/Rocco-Van-Schalkwyk. And view his Amazon author page here Rocco Van Schalkwyk on Amazon.


Inevitability of Existence

I found the response to my article Does Free Will Exist very interesting. It was fascinating to see how even ardent thinkers battled with the idea that our brains are us and we are our brains – and that we have no control over what our brains make us do.

They mentioned issues of initiative and freedom and the ability to make decisions, but every time it came back to the fact that the brain – and its wired processes – makes those decisions on our behalf. The best we can do to change how our minds work is to get special training or pour alcohol down our throats to make our brains work differently. And even those actions are actually decisions made by the brain – not ‘us’.

We simply cannot escape from the fact that our brains generate our thoughts, fears, emotions and decisions. And we? Well, we cart it around doing what it wants us to do to survive and stay comfortable…get food, oxygen and adequate entertainment.

Then it struck me the other day – it is actually worse!

Our brains comprise many interconnected neurons, which consist of molecules. The molecules in turn are made up of atoms (and all the sub-atomic particles that make up the atoms).

These teeny weeny little particles follow the laws of physics slavishly, mindlessly. They do not have brains and they do not have personalities – they are bits of ‘non-nothingness’ that are swirled around by the laws that govern the universe.

But these dumb little minions – by virtue of being shunted around by the laws of physics – make the atoms ‘atom’, make the neurons ‘neuron’ and make the brain ‘brain’.

Our deepest hopes, fears, joyous experiences and moments of deep insight – well, actually we have nothing to do with them. They come about by the dumb stuff being swirled around obeying one thing and one thing only – the laws of nature.

I find this a little depressing.

I want to scream out: ‘No! I am not like that! I am a thinker, I am special! I am unique!’

But then I realise, it is just my brain making me say that…

Dust clouds around stars.

Dust clouds around stars that will over time form planets. If ever life evolves in this part of the universe, the creatures roaming these planets will be made up of the basic building blocks that once were these dust clouds – their cells, their limbs and their brains. And their brains will generate their thoughts and emotions.

Dust clouds around stars. [Image credit: Jacques Kluska et al.] These are enhanced telescope images showing the dust clouds around a distant star that will form planets over time.

Myth – the Brain is too Complex to Model

Many people who become interested in the brain are immediately overwhelmed by its seemingly infinite complexity – the billions of neurons (brain cells) and the trillions of connections between them, and the fact that it is often hailed as the most complex biological system in the known universe. Some also find it hard to believe it is possible to emulate the brain, based on their own subjective experience of life – their ability to learn seemingly endless amounts, their rich and complex thoughts and feelings, their ability to find innovative solutions to completely new problems. Many researchers do not think there is a simple way to model the brain.

Because of this daunting complexity, many researchers have moved away from trying to solve the brain in its entirety and have started to focus instead on specific aspects of it, like bipedal motion and balancing, obstacle avoidance, object recognition, machine learning, neural networks, neuron models and task-orientated robotic applications.

I want you to stop asking: ‘How can we ever replicate all those neurons and connections to build a brain?’

Instead, I want you to rather ask: ‘What is it that all those neurons actually do?’

I believe I have found a simple way to understand and model the brain. I have discovered that the brain is so simple that it can be depicted as an A4-sized block diagram, and that my model is no more difficult to understand than learning to drive a car – but it is important to open your mind and, for the moment, forget everything you have been taught about the brain or AI before. I have used my model to build virtual and real (physical) robots, capable of intelligence and emotions, which gives me confidence that my model is working correctly.

Discovering the deepest secrets of the brain has been an amazing journey of discovery for me – and I want to bring this to the world in a responsible way. For now, I want to invite you to read up about my discovery on the Xzistor LAB website and look at the demo videos I have posted on my YouTube Channel. And hopefully this will get you interested in my world! May this become an amazing journey for you too!