Stay Hungry. Stay Foolish.

A blog by Leon Oudejans

Needs-Wants-Beliefs (5) – Learning from Mistakes

A few days ago, I watched a 2016 TED video on machine learning, as the topic felt relevant given my recent blogs on AI, consciousness and self-awareness. Suddenly, a casual phrase from part 4 of these blog articles made perfect sense: learning from mistakes. Let me start with a graphic that outlines the differences and similarities between human and machine learning.

The main difference between humans and machines is fear. “According to evolutionists, feeling fear is a crucial element in human survival”, an excerpt from my 13 August 2015 blog – Who is afraid of whom? Fear taps into our unknown knowns (ie, intuition), known unknowns (ie, beliefs) and unknown unknowns (ie, imagination). Fear may also be based upon known knowns (ie, facts). See my 3 April 2016 blog on human firmware. I doubt that AI robots can learn about fear.

A second difference is the way of learning. Human learning is based upon teaching, while machine learning is mostly based upon self-study. Teaching includes several important features: it explains why, when, what and how. Teaching also has downsides: eg, attention span, bias, relevance, and subjectivity. Human self-study is often an integral part of teaching.

The TED video states that we (humans) often do not even know what machines are teaching themselves in the case of “machine learning”. They are given a set of data and they start data crunching. Machines may thus develop logic that is not logical to humans. Machines may also find logic that is new to us, or logic that humans have disregarded on emotional, ethical or moral grounds.
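As a rough illustration of that data crunching, here is a minimal sketch in Python. The toy data, the logistic unit and the learning rate are my own illustrative choices, not anything from the TED talk; the point is only what the machine's learned “logic” ends up looking like.

```python
# Minimal sketch of machine learning: the machine is given only
# input/output examples and adjusts its internal numbers (weights)
# until they fit. All data here is invented for illustration.
import math
import random

random.seed(0)

# Toy data: (feature_1, feature_2) -> label; it happens to encode AND.
data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 0), ((1.0, 1.0), 1)]

w1, w2, b = random.random(), random.random(), random.random()

def predict(x1, x2):
    # Logistic (sigmoid) unit: squashes a weighted sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(5000):                # the "data crunching"
    for (x1, x2), y in data:
        p = predict(x1, x2)
        err = p - y                  # how wrong the current guess is
        w1 -= 0.1 * err * x1         # nudge each weight a little
        w2 -= 0.1 * err * x2
        b -= 0.1 * err

# The learned "logic" is just three numbers; nothing in them says "AND".
print(w1, w2, b)
print([round(predict(x1, x2)) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Nothing in those three learned numbers reads as a human rule; scale this up to millions of weights and the machine's reasoning becomes opaque to us.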

The TED video concludes that we need to teach ethics and morals to AI robots. In 1942, Isaac Asimov introduced the Three Laws of Robotics in his short story “Runaround”. Essentially, these 3 laws prioritise human life over robots. These laws clearly conflict when robots transport human life, for example in autonomous, self-driving cars: when every available manoeuvre harms some human, the First Law cannot be satisfied. Also see my 21 October 2016 blog.
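To make that conflict concrete, here is a hypothetical sketch in Python; the driving options and the humans harmed are invented for illustration. It shows the First Law (a robot may not injure a human being or, through inaction, allow a human being to come to harm) permitting none of the car's available actions:

```python
# Hypothetical sketch (not from the blog or Asimov): a self-driving
# car where every available action harms some human, so Asimov's
# First Law rules out every option, including doing nothing.

# Each option maps to the set of humans it would harm (invented data).
options = {
    "swerve_left": {"pedestrian"},
    "swerve_right": {"cyclist"},
    "brake_straight": {"passenger"},  # the human life the robot transports
}

# First Law filter: keep only options that harm nobody.
lawful = [name for name, harmed in options.items() if not harmed]

print(lawful)  # -> []: the First Law leaves the robot no lawful action
```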

Ethics and morals are part of 2 Belief systems: Philosophy (Knowledge domain) and Religion (Power domain). A 2014 question on ResearchGate supports my doubt that ethics and morals are universal within humanity. Hence, ethics and morals in machines will face similar challenges.

The absence of fear in AI robots will cause premature self-destruction. This will become a problem given the limited availability of the precious metals and/or rare-earth metals (eg, Phys) that are required to build them. As in humans, an absence of fear could – and thus will – become the biggest pitfall for AI robots. An interesting and unexpected idea.
