Upcoming Projects

[Ideas on this page are always evolving and subject to change based on input from the patron community. Nothing is fixed here – we are just shooting the breeze. I will focus on what my patrons want to prioritise!]

Future Projects (Themes)

In this section I will provide you with short overviews of projects ahead of kicking them off. This will set the scene and explain what I aim to achieve, the assumptions, basic approach and anticipated/desired outcomes. As each project commences I will offer more detailed discussions on the dedicated project ‘workbench’ pages where you can see the unedited notes / progress / late night frustrations as it happens :-). You can leave your comments in the patron areas and patron forums. Enjoy!

Short Term (Ready to Proceed)

Designing Robot Senses (By Popular Demand)

It seems like those Labbers cracking on with their home AI labs want some clarity on the integration of senses into an EV3 robot. This is easy and interesting, as new sensors are coming onto the market almost daily – many from third-party suppliers. All of these are easy to integrate with the EV3 platform. This is really worthy of a demo video – so if there is enough interest I can start preparing to build one.
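To give a flavour of how simple this integration can be, here is a minimal sketch of polling two ‘senses’ in one loop, assuming the python-ev3dev2 library with an ultrasonic sensor on port 1 and a touch sensor on port 2 (the ports and sensor choice are just placeholders):

```python
# Minimal sketch: polling two 'senses' on an EV3 brick (assumes python-ev3dev2).
# Ports and sensor types are placeholders - swap in whatever is plugged in.
from time import sleep
from ev3dev2.sensor import INPUT_1, INPUT_2
from ev3dev2.sensor.lego import UltrasonicSensor, TouchSensor

distance_sense = UltrasonicSensor(INPUT_1)   # a crude 'spatial' sense
touch_sense = TouchSensor(INPUT_2)           # a crude 'tactile' sense

while True:
    distance_cm = distance_sense.distance_centimeters
    touched = touch_sense.is_pressed
    # In a brain-model loop these raw readings would be fed in as sensory
    # states; here we simply print them.
    print(f"distance: {distance_cm:.1f} cm, touched: {touched}")
    sleep(0.2)
```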

Note to self: Explain why WiFi and Bluetooth capability cannot be regarded as additional senses.

Portal to area – here.

Deconstructing Depression

One of the areas I have always known I will be able to contribute to is an understanding of depression – and I mean not just all the clinical stuff we know, but what we don’t know: how it works in the mind! When you can build an artificial brain model for a robot and make it feel depressed, you truly understand the mechanism, and this will help find a way to eradicate this worldwide epidemic. I hope to do a short demo video on this soon! Let me know if this will be helpful / interesting!

Portal to area – here.

Robot Anger / Aggression

Demo Video (Published on YouTube)

Robot Fatigue

Demo Video (In Progress)

Robot Motion

Demo Video (In Progress)

Robot Entertainment

Demo Video (Published on YouTube)

Robot Phobia

Demo Video (Published on YouTube)

Longer Term (Pi in the Sky)

Some of the ideas I have come up with will (I know) take longer and require added resources (possibly sponsorship and collaborations), and these are not on my radar right now. Occasionally my mind wanders off to these and I jot down a few thoughts. Some of these ideas have actually progressed quite far and revealed amazing new worlds – all of which in turn could offer many individual new projects – but I have to manage my time carefully because I still have my employment obligations to look after! This can only be a hobby for me!

Second Life meets Xzistor Concept (avatars with intelligence and emotions)

A massive amount of work has been done already – I must dig it up for you! Think about it: these avatars are effectively just “Simmys” with different outer shells. Imagine a little ‘virtual Xzistor LAB‘ in Second Life with Simmy and other robots inside it! Freaky – but easy to do! The difference is that these virtual agents will have intelligence and emotions because they will be running on the Xzistor Concept brain code.

Note: Remember Simmy has no idea that he is living in a virtual world. Simmy ‘feels’ the floor, objects, the physical walls and pain from the cactus thorns, tastes food, watches TV and hears noise – exactly as Troopy does. 🙂

Portal to area – here.

The Sims (with intelligence and emotions)

The same as for Second Life above. We can give intelligence and emotions to Sims characters. They will have to learn first, but later we can transfer memory files to other characters. Remember I am already training robots, exporting their ‘experience files’ and then ‘importing’ these into other virtual / real robots – one button (“T-key”) to export and one button (“R-key”) to import. I will do a demo video covering memory (including forgetting) because it is quite interesting to see a robot wake up ‘dumb’ and then suddenly come to life when the memory (experience) file is imported. These are big .csv files – LOL! By now I am able to read a robot’s mind and life story by looking at its memory file. I can even fiddle with the files. For example, I can add more fear of a specific object – cognitive engineering!
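To make the ‘fiddling’ idea concrete, here is a toy sketch of boosting the fear linked to one object in an exported experience file. The column names (object, emotion, weight) are my own placeholders for this example – the real Xzistor .csv layout is richer than this:

```python
# Toy sketch of 'cognitive engineering' on an exported experience file.
# The column names (object, emotion, weight) are hypothetical placeholders.
import csv

def boost_fear(in_path, out_path, target_object, factor=2.0):
    """Read an experience file and amplify the fear weight for one object."""
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))

    for row in rows:
        if row["object"] == target_object and row["emotion"] == "fear":
            row["weight"] = str(float(row["weight"]) * factor)

    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

# e.g. make the robot twice as afraid of the cactus before re-importing:
# boost_fear("troopy_export.csv", "troopy_edited.csv", "cactus", factor=2.0)
```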

Of course I will have to teach the initial robot first, but then it is easy to convey these memory files (learning) to other virtual agents. An interesting option is to make Sims (or Second Life) characters fall in love – this could even be turned into a virtual dating app where you add your avatar and preferences to the game and see if you hit it off with someone else’s avatar! Lots of ideas!

Portal to area – here.

Robot Heaven

A massive amount of work has been done already – a fascinating project that turns many things on their head and addresses the question of how robots die and where their souls go. The funny thing is – I actually managed to sort this out in a very ethical way. Watch this space! It is not so far-fetched if we consider that Simmy does not know he lives in a simulation: when Troopy dies, we can place his ‘brain’ in a simulated environment where we model his ‘eyesight’, ‘tactile sense’, etc. as we have done for Simmy, and simply transfer his memory file (all he has learned in his life).

If the simulated environment looks exactly like Troopy’s physical Learning Confine, he might not even realise when he ‘transitions’ into a simulated world. We can do a 3D scan of his body so that his physical appearance in the simulation is also representative. There are many interesting considerations as to the ‘rules’ of such a heaven…and the need for some entertainment to fight the boredom! This is a longer-term discussion, as Xzistor Concept robots will not die in the same manner humans do. Why bother about robots dying? We need to remember my fundamental premise that Xzistor Concept robots will have intelligence and emotions (feelings) similar to humans. It comes down to a moral/ethical opportunity…some will say an obligation.

Portal to area – here.

Dreams Are Us

Adapt the Xzistor Concept dashboard to show what Troopy is dreaming about – we can recall the photo images in his mind while he is asleep. This will require a slightly more powerful processor than the one we are using at the moment (sponsor?).

If this sounds a bit far-fetched – just think carefully about how easy it would be. Troopy currently records video of his environment and takes snapshots (frame-grabs) from this video to analyse pixel by pixel. For each association there is a snapshot. When he dreams, it is only an algorithmic regurgitation of these associations – a mixture of imagery, emotions and ‘other’ data. We will not only see the images; the Xzistor Concept explains exactly why his brain chose to build his dreams the way it did – we will see the role of recency, emotional intensity and repetition in weighting associations in the brain during dreaming, and the role of the ‘other’ data. You will look at this and understand exactly why you dream what you dream. So I am really keen to see if I can flash up the visual images of what Troopy or Simmy think of when they dream…really exciting! We should be able to see the images flash up and distort in real time while he sleeps and dreams. The distortion is the bit that interests me – because it creates new images (e.g. faces, creatures, scenarios) never experienced before by combining existing images and warping them with fairly simple algorithms – using the ‘other’ data.
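As a rough illustration of the weighting idea only (not the actual Xzistor dream algorithm), the sketch below scores stored associations by recency, emotional intensity and repetition, picks two, and blends their snapshots into a new ‘dream frame’ – the fields and weighting formula are assumptions:

```python
# Rough illustration only: weighted recall of associations plus simple image
# blending to mimic dream 'distortion'. Fields and weights are assumptions.
import random
from dataclasses import dataclass
from PIL import Image

@dataclass
class Association:
    snapshot_path: str   # frame-grab stored with the association
    recency: float       # 0..1, higher = more recent
    intensity: float     # 0..1, emotional intensity
    repetitions: int     # how often it was reinforced

def dream_weight(a: Association) -> float:
    # Hypothetical weighting: recency, intensity and repetition all count.
    return a.recency + a.intensity + 0.1 * a.repetitions

def dream_frame(associations: list[Association]) -> Image.Image:
    weights = [dream_weight(a) for a in associations]
    first, second = random.choices(associations, weights=weights, k=2)
    img1 = Image.open(first.snapshot_path).convert("RGB")
    img2 = Image.open(second.snapshot_path).convert("RGB").resize(img1.size)
    # Blending two remembered snapshots produces a 'never seen before' image.
    return Image.blend(img1, img2, alpha=0.5)
```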

Portal to area – here.

Human Brain Replacement

I have done some work on this, but it is difficult…yes, I have a model for an artificial brain…but integrating it with the biological brain is not easy…it is a fascinating challenge though, and I need to show you the machine I have designed that a human must climb into when controlled by an Xzistor Concept brain. I am lurking around Elon Musk‘s Neuralink project to see how it can inform my efforts. Go Elon! Go Neuralink!

A very interesting aspect of this study is taking the Xzistor Concept architecture and mapping it to the human (biological) brain. I have just started this, but it is interesting to read statements like ‘…this part of the brain seems to be involved in hunger, aggression, reward…’ – and my brain model will so eloquently explain why these seemingly unrelated functions might pass through the same piece of brain tissue. There is no rush for me on this – but I am pulled to it by the urge to map my architecture to the biological brain, because it looks promising! Yes, my model explains urges too!

Portal to area – here.

Ultimate Companion Robot

I wish this was not the case…but the Xzistor Concept is eminently suitable for a companion (love?) robot. This is because we can give a companion robot emotions – in principle no different from human emotions. It can be given simple intelligence with learning, e.g. recognising faces and contextualising them in terms of experience. The companion robot does not need to move around a lot, so the field of view does not get filled with many different optical states, which would require heavy processing while the robot navigates around, say, a room or confine. If there is a lot of interest, I might pursue this…even design something! Where is the world headed? Of course this will be everywhere in a hundred years’ time. Marriage to companion robots…yes, of course…it’s coming! A little freaky…!
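For the ‘recognise faces and contextualise in terms of experience’ part, a minimal sketch could look like the following, assuming the off-the-shelf face_recognition library and a made-up table of learned emotional associations (none of this is the actual Xzistor implementation):

```python
# Minimal sketch: recognise a face and look up the emotion previously
# associated with that person. The library choice (face_recognition) and the
# emotion table are assumptions, not the Xzistor implementation.
import face_recognition

# Encodings learned earlier, paired with a stored emotional association.
known_encodings = []      # list of 128-d face encodings
known_emotions = []       # e.g. [("Anna", +0.8), ("Stranger", -0.2)]

def contextualise(image_path):
    image = face_recognition.load_image_file(image_path)
    for encoding in face_recognition.face_encodings(image):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        for matched, (name, affect) in zip(matches, known_emotions):
            if matched:
                return name, affect   # person plus learned 'like/dislike' score
    return None, 0.0                  # unknown face: neutral until learned
```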

Portal to area – here.

Mars Colony of Xzistor Bots

Just a typical design for an Xzistor Robot that can live and work on Mars. They will not get homesick, miss their families, get bored with the landscape or choke on the carbon dioxide atmosphere. They can grow into a socially cohesive colony and perform many different tasks while they evolve and learn.

Portal to area – here.

Mechanical Only Sentient Robot

I have a weird, irrational obsession that I cannot get rid of. I want to build a mechanical (no electronics) robot with intelligence and emotions. I know this sounds absurd, but think about this: my brain model takes the ‘functions’ in the biological brain and gets computer chips (along with software) to perform them. I know these ‘functions’ can also be performed by mechanical systems. Imagine a massive rusty steel robot in the park that will ‘remember’ kids and ‘like’ or ‘dislike’ them. All you need is to weld together the ‘functions’ provided by the Xzistor Concept. I can’t get this out of my mind! It is so unnecessary – what am I trying to prove?! Simplicity perhaps? Maybe triumph over having hacked the brain and revealed its long-held secrets?

Portal to area – here.

Designing an Uberbot

Ever since my discovery I have been thinking about where this brain model will take us in future. I know this is the architecture that will drive the future ‘Uber’ bots. I believe this because once you understand the Xzistor Concept, you realise there is only one principal model. It will be vitally important to proceed on a safe basis as these machines will eventually get much smarter than humans. I have a very clear vision of how this can be taken forward in a safe and secure way – perhaps the only safe way possible – that will draw humans and machines together in future and not apart…as we move closer to the Singularity.

Portal to area – here.

New Processor for Parallel Brain

Ideally, the Uberbot above will be driven by one or more parallel processors (typically artificial neural networks, ANNs). I had no intention of looking at these designs, but my brain model has indicated that there could be another approach besides parallel ANNs or multi-core serial processors. Here we have one area where I sometimes go a little ‘…is that really all I consist of…’

I can best describe the functions passing information around the brain as a couple of rugby teams passing balls around – and you see how it works and you think ‘where am I amidst all of this…is that all I am..?‘. Yes, there is only information in the brain, information that gets passed around – and that makes us happy / sad, makes us feel we can explode with anger and makes us see and smell a green lawn. It is only ever information, bits and bytes, 0s and 1s. And the Xzistor Concept explains why it need not be more than that…
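To picture the ‘passing information around’ idea, here is a tiny toy sketch where each brain function is just a worker handing a message to the next via a queue – purely illustrative, not a processor design, and the function names and affect values are made up:

```python
# Toy picture of 'functions passing information around': each brain function
# takes a message off a queue, transforms it, and passes it on.
import queue

vision_to_emotion = queue.Queue()
emotion_to_action = queue.Queue()

def vision_function(pixel_summary):
    # 'Sees' something and passes the information on.
    vision_to_emotion.put({"object": pixel_summary, "seen": True})

def emotion_function():
    msg = vision_to_emotion.get()
    # Attaches a feeling to the information and passes it on again.
    msg["affect"] = -0.7 if msg["object"] == "cactus" else +0.3
    emotion_to_action.put(msg)

def action_function():
    msg = emotion_to_action.get()
    return "retreat" if msg["affect"] < 0 else "approach"

vision_function("cactus")
emotion_function()
print(action_function())   # -> "retreat": only information, passed around
```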

This will take a while – no plan to expedite.

Goodness me, the colour GREEN only exists in the mind. But when you look inside the mind there is no GREEN…only information that we call GREEN. But when I look at something that is GREEN, it looks so… very GREEN! 🙂

PS. Extensive use of analogue systems. Semiotic approach.

Portal to area – here.

Asymptotic mirror inflection mechanism for semiotic processor.

Human Poker Table Brain Game

I often think about the simplest way to explain my brain model to others. Imagine I take all the Xzistor Concept functions and get humans around a poker table to perform them. Will this ‘human system’ act like a brain – can it generate intelligence and emotions? I am 99% sure it can, and it could become a very interesting human experiment. We could even pit two poker tables against each other – make one table narrow-minded (engineer) and the other lateral-minded (artist) and see which gets to the answer first. This will probably be the best way to explain the Xzistor Concept to the uninitiated. IP protection could be an issue here.

Portal to area – here.

Warship Analogy

Another way to describe the working of the brain to those unfamiliar with the Xzistor Concept. Lots of work has been done already. IP protection could be an issue here.

Portal to area – here.

I love these far-fetched ideas that might turn into future projects. To be very honest, once you have my brain model and understand the whole brain game…it is actually the worlds it opens up that are more exciting! My advice to you is…stick around!