Can we make neuroscience go faster?

By making things simpler…

In recent years the concept of Artificial General Intelligence (AGI) has attracted a great deal of attention and is now being pursued by numerous high-profile research institutions around the world. Interestingly, there is still no single consensus definition of AGI. What members of the AI community are clearer about is what AGI is not – it is not ‘Narrow AI’, which can only solve problems within narrow contexts or environments. Some have defined AGI as ‘Strong AI’ to indicate a wider ability to solve problems in non-specific contexts and environments, although this should not be confused with earlier definitions of ‘strong AI’. Ben Goertzel has formulated what he refers to as the “core AGI hypothesis”, stating that the creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability is, at bottom, qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability.

Given the many divergent views and approaches towards AGI, it is not always clear whether researchers are simply pursuing a capability that can solve intellectual problems at a human level or higher, or whether they are specifically attempting to emulate humanlike thinking in machines using brain-inspired processes. Here ‘humanlike’ refers to a functional approach derived from brain logic, with high-level functions that are in principle no different from those performed by the human brain: the focus is on what is achieved (function) rather than how it is achieved (biological mechanisms).

To avoid confusion, in this paper Artificial General Intelligence (AGI) will simply be defined as the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. At the time of writing, many deem an AGI solution to be decades away. The approach towards an AGI solution discussed in this paper is based on the Xzistor Concept – a functional brain model that provides intelligent agents with humanlike intelligence and emotions. This brain-inspired cognitive architecture is ‘means agnostic’, meaning that it can be instantiated in software, in hardware, or in combinations of both, and scaled to the limits of available technology.

We know AGI will require an understanding of the brain. This understanding will hopefully come from the neuroscientific community. Can it be accelerated by taking a step back in time and revisiting a simple functional approach to studying the brain? This might just be the secret…

Read more here

Ano

Rocco Van Schalkwyk (alias Ano) is the founder of the Xzistor LAB (www.xzistor.com) and inventor of the Xzistor Concept brain model. Also known as the 'hermeneutic hobbyist', he has developed a functional brain model able to provide robots and virtual agents with real intelligence and emotions.
