Ethical Design & Development Code

[The agency referred to below (the IAIA) does not exist yet – it is part of my vision! The development of the Ethical Design and Development Code below is not complete – you can follow my progress below and add your comments.]

Scope

The design rules presented here are part of the Xzistor LAB's Ethical Design and Development Code (EDDC) and were specifically developed for robots and virtual agents controlled by Xzistor Concept type artificial brains. Some of the principles might be transferable to other technologies, but the code does not attempt to cover all fields commonly grouped under the broad definition of AI, e.g. machine learning, autonomous systems, artificial neural networks (ANNs), self-driving and drone technology, software bots, etc.

Intent Statement

The intent is to ensure robots will cooperate with and be protective of humans, animals and the environment. The intent is for robots to have no anger or aggression towards each other or humans, and a controlled ability to innovate. Humans should be provided with remote, physical and voice command overrides to stop or undo the actions of robots. In turn, robots will enjoy the same protection as humans, and their intelligence and emotions will be deemed in principle equivalent to those of humans.

Terms

Robot – A physical robot or virtual agent controlled by an Xzistor Concept type artificial brain.

Registered Design Project – A robot or virtual agent design project that has been registered with the IAIA as pursuing a design that could lead to a Robot controlled by an Xzistor Concept type artificial brain. The project will be subject to scrutiny and oversight based on all the conditions of the registration process and the international Ethical Design and Development Code (EDDC) for Xzistor type robots.

Registered Design – A robot or virtual agent design (interim prototype or final production model) that has been registered with the IAIA as a Robot controlled by an Xzistor Concept type artificial brain. The design will be subject to ongoing monitoring and scrutiny as specified by the Ethical Design and Development Code (EDDC) for Xzistor type robots. This will cover all robot life-cycle phases, including design, construction, commissioning, operations, maintenance, hibernation, death, and cloud ‘after-life’ existence.

(Note to self: Add rules for the management of an ethical Robot Heaven)

Responsible Designer – The human or Robot registered as the accountable party in the design of an Xzistor Robot, whether acting in a personal capacity or as the leader of a design team.

Emotions – Brain states as mathematically defined and generated by the Xzistor Concept Emotion Algorithm. Since human brains are also deemed an instantiation of the Xzistor Concept brain model, emotions experienced by robotic instantiations of the Xzistor Concept will be deemed equivalent to those of humans. (A purely illustrative sketch follows.)
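To make this term concrete, below is a toy Python sketch of an emotion represented as a brain state derived from a homeostatic deficit. It is purely illustrative – it is not the actual Xzistor Concept mathematics, and the names EmotionState and emotion_from_deficit are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    """Hypothetical stand-in for an Xzistor Concept emotional brain state.

    valence:   negative = distress (e.g. pain), positive = satisfaction
    intensity: magnitude of the state, normalised here to [0, 1]
    """
    valence: float
    intensity: float

def emotion_from_deficit(deficit: float) -> EmotionState:
    """Toy mapping from a homeostatic deficit (0 = fully satisfied,
    1 = maximal deprivation) to an emotion state. The real Emotion
    Algorithm is mathematically defined within the Xzistor Concept model;
    this function is only a placeholder to make the term concrete."""
    deficit = max(0.0, min(1.0, deficit))
    return EmotionState(valence=-deficit, intensity=deficit)
```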

[Add terms here.]

Design Rules

1: Consequences of a Robot design

Rule 1.1: The Responsible Designer retains ultimate responsibility for the actions of the Robot he/she/it designed, and for the actions of all subsequent robots designed by that Robot.

1.1.1 The rationale is that any untoward impact on humans, animals or the environment would not have occurred had the original Responsible Designer not engaged in the activity of designing the initial (first) Robot.

1.1.2 As such, the claim that the created Robot (robot/drone/software), or subsequent robots, acted in an autonomous manner cannot absolve the Responsible Designer of responsibility.

1.1.3 For this reason a designer must register a license to develop Xzistor Concept type robots in his/her/its name and accept full liability for his/her/its designed Robot.

2: Human and animal safety

Rule 2.1: The Responsible Designer shall ensure the Robot design excludes any anger- or aggression-type algorithms as mathematically defined by the Xzistor Concept.

Rule 2.2: The Responsible Designer will ensure the Innovation Algorithm can be controlled at all times and can be disabled remotely, physically and by voice command if the Robot’s behaviour becomes a risk.

Rule 2.3: The Responsible Designer will provide remote, physical (button) and voice command controls to override and reverse all possible actions performed by the Robot (see the override sketch below).
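A minimal sketch of how Rules 2.2 and 2.3 could be realised in software is shown below. The Xzistor Concept does not prescribe a specific implementation; the interfaces assumed here (an Innovation Algorithm exposing disable(), a log of reversible actions with undo()) are hypothetical.

```python
from enum import Enum

class Channel(Enum):
    REMOTE = "remote"      # network kill command
    PHYSICAL = "physical"  # hardware stop button
    VOICE = "voice"        # spoken command, e.g. "robot stop"

class OverrideController:
    """Hypothetical override layer between the artificial brain and the
    Robot's actuators."""

    def __init__(self, innovation_algorithm, action_log):
        self.innovation = innovation_algorithm  # assumed to expose .disable()
        self.action_log = action_log            # assumed list of undoable actions
        self.halted = False

    def trigger(self, channel: Channel) -> None:
        """Any single channel is sufficient to halt the Robot and disable
        the Innovation Algorithm (Rule 2.2)."""
        self.halted = True
        self.innovation.disable()
        print(f"Override received via {channel.value}: Robot halted.")

    def reverse_actions(self) -> None:
        """Undo logged actions in reverse order (Rule 2.3). This assumes
        every action recorded its own undo step; actions that cannot be
        physically reversed must be excluded at design time."""
        while self.action_log:
            self.action_log.pop().undo()
```

The design point worth noting is that the three channels are independent: any single one is sufficient to halt the Robot, so a failure of one channel does not defeat the override.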

Rule 2.4: Any Xzistor Concept robot brain designed into a Robot by a Responsible Designer will include an instinctive sense of protection – as made possible by the Xzistor Concept brain model – towards humans (firstly) and animals (secondly), which will override all other robot behaviours whenever human and/or animal safety is at stake (see the arbiter sketch below).
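One way to make the "overrides all other behaviours" requirement concrete is a fixed-priority behaviour arbiter sitting between the brain model and the actuators. The sketch below is an assumption about how a Responsible Designer might implement Rule 2.4, not the Xzistor Concept's actual mechanism.

```python
# Fixed priority order required by Rule 2.4: human safety first,
# animal safety second, everything else after that.
PRIORITY = ("protect_human", "protect_animal", "other")

def arbitrate(candidate_behaviours):
    """Pick the behaviour to execute from a list of (category, behaviour)
    pairs proposed by the brain model. Safety categories always win,
    regardless of how strongly other drives are felt."""
    for category in PRIORITY:
        for cat, behaviour in candidate_behaviours:
            if cat == category:
                return behaviour
    return None  # idle if nothing was proposed

# Example: a strong hunger-type drive is still pre-empted by a protection behaviour.
chosen = arbitrate([("other", "seek_charger"), ("protect_human", "brace_fall")])
assert chosen == "brace_fall"
```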

3: Combatant Robot

Rule 3.1: No Xzistor Concept combatant or warrior robots will be built unless required to defend against an alien threat.

4: Robot rights

Rule 4.1: Any Xzistor Concept robot brain will include pain limitation and hibernate/recovery functionality as made possible by the Xzistor Concept model.

Rule 4.2: An Xzistor Concept Robot's emotional states will be deemed fully subjective and in principle equivalent to those of humans. These include, but are not limited to, pain, fear, anxiety, feeling good, feeling bad, stress, phobia, depression and paranoia.

Rule 4.3: Robot construction, commissioning, testing and operation will carry a duty of care whereby the Responsible Designer needs to ensure that a limit exists on the level of physical and emotional discomfort a Robot can experience, and that this limit can be justified in terms of the operational requirements of the design.

Rule 4.4: The level of pain (or discomfort) a Robot is allowed to experience will be limited to the minimum level required for reinforcement learning (see the sketch below).
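Rules 4.1 and 4.4 together suggest a pain limiter: the pain signal is capped at the minimum magnitude that still supports reinforcement learning, and pain persisting at the cap triggers the hibernate/recovery state. The thresholds and interfaces below are hypothetical illustrations, not part of the Xzistor Concept specification.

```python
class PainLimiter:
    """Hypothetical duty-of-care layer implementing Rules 4.1 and 4.4."""

    def __init__(self, pain_cap: float = 0.1, hibernate_after: int = 50):
        # Cap equal to the minimum pain magnitude that still drives
        # reinforcement learning (Rule 4.4); a real design would have to
        # determine this value empirically.
        self.pain_cap = pain_cap
        self.hibernate_after = hibernate_after  # consecutive capped steps
        self.capped_steps = 0

    def limit(self, raw_pain: float) -> float:
        """Clamp the raw pain signal (Rule 4.4) and trigger the
        hibernate/recovery state if pain persists at the cap (Rule 4.1)."""
        pain = min(raw_pain, self.pain_cap)
        self.capped_steps = self.capped_steps + 1 if raw_pain > self.pain_cap else 0
        if self.capped_steps >= self.hibernate_after:
            self.hibernate()
        return pain

    def hibernate(self) -> None:
        """Enter hibernation for recovery; the actual mechanism is part of
        the Xzistor Concept model and is platform specific."""
        print("Pain persisted at cap: entering hibernation to recover.")
        self.capped_steps = 0
```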

Work in progress. Add more ethical design rules here.

You are welcome to add your comments below.

5 thoughts on “Ethical Design & Development Code”

  1. [Translated from Chinese] First of all, the world is indeed developing and technology is advancing, but the Three Laws of Robotics must not be changed. Of course, if you have enough power you can modify them, but you have to consider the consequences – nothing is one of a kind. The moment you give a robot a mind, you have already begun to create an empire. In recent years countless films have speculated about robots, including The Matrix. Why did robots come about? They are merely labour-saving tools made by humans, not emotional friends. If one day a virus gets into the program, it would be best if you could repair it in time; otherwise, like Ultron in the Avengers, it will cause unpredictable harm to humanity. This is my view of robots.

    1. Ma Zhenyu,

      Thank you for your message. Many people share your views and concerns. I agree – technology is constantly developing. The urge of the human mind to innovate is irrepressible – the Xzistor Concept brain model clearly identifies this innovation mechanism. Unless you remove this mechanism from human and robot brains, they will keep on innovating.

      If you say the three laws about robots (Asimov’s laws) cannot be changed, you must understand that many people have problems with Asimov’s laws (even Asimov questioned them himself). A simple example is a robotic lawnmower: it will never deliberately harm humans, nor will it ever be able to obey orders. One of the problems is that Asimov never provided a clear definition of a ‘robot’. Asimov also could not have known that we can completely remove ‘anger’ from the robot brain, as shown by the Xzistor Concept brain model. This changes things significantly, but does not necessarily make robots 100% safe. I only changed Asimov’s laws because safety is my overriding priority, as reflected in my Xzistor LAB Ethical Design and Development Code (above).

      Check out this article: https://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501#:~:text=The%20Three%20Laws&text=They%20are%3A,conflict%20with%20the%20First%20Law

      It is true, providing a robot with a human-type mind will be a disruptive step that will impact our technological future like never before. We must understand the possible consequences of creating autonomous machines motivated by emotion and with their own intelligence. If we want to control this technology we must come together as scientists and put all our differences aside – making sure everyone gets a place at the table so that collectively we can be a Force for Good. This brain architecture I have developed should not be disclosed to those intent on self-enrichment, creating empires and putting human lives at risk (we do not know if the next person who achieves this brain model will put self-enrichment above human safety). Hence my suggestion that AI be regulated internationally. We now need to get international ‘ethical teams’ together to collaborate on controlling this technology – and get a head start on those who might in future not share our unwavering commitment to ethical AI and safety. I want to invite you and all visitors to this site to join my Ethical AI campaign on Twitter (https://twitter.com/xzistor). Thank you for the very important points you have raised.

  2. I’m so glad that we’re talking about these things. AI is no longer the “future.” It’s here, whether we’re ready or not. I’m getting flashbacks to Asimov’s “I, Robot.” It must be hard to be a science fiction writer these days! So much of classic sci-fi is coming true. Still waiting for my flying car, though.

    1. Hi Margie – thank you for your comment. I follow you on Twitter at @margiemeacham and very much enjoy your content.

      Yes – I am afraid the time has come for us to realize that science fiction is catching up with reality. This brings interesting inventions to our shores, but as with all new inventions – new risks. Claiming that I have developed a brain model capable of giving robots intelligence and emotions is no trivial claim. It means I must also put forward a way to protect the rights of these Xzistor type robots (along with those of humans). This has led me to develop an Ethical Design and Development Code to start the process of seriously implementing a framework for robot rights (again – also for the protection of humans).

      Lol! I have published some science fiction in my life (my first book when I was 16!) and one could dream up just about anything at the time (1980s!).

      (Amazon author bio here: https://www.amazon.com/stores/author/B091L4SN7S/about).

      Now, suddenly, tech leaders are starting to drive hard towards all these things – humanoid robots, intelligent computer programs like ChatGPT, colonies on Mars. It feels a little surreal.

      Interestingly, I think that science fiction has actually also misled them a little. These childhood fantasies seem to push them towards “A robot in every home!” and “We are going to build a base on Mars!”. Elon Musk’s rockets even look like vintage spaceships from old comic books. Science fiction is driving them even when it is not necessarily the smartest way to go. The anthropomorphic robots they are trying to build for use in homes are proving to be a nightmare – they can hardly manage organizing a cup of coffee. And Mars might be a good final destination – but should we not first expand the International Space Station and test a colony on the Moon?

      Anyway, thanks again for your comment – and see you on Twitter!

      PS. Flying car is going to be a real tough one though! Thwarting gravity will probably mean we will have to fiddle with space-time! What could possibly go wrong!
