Stay Hungry. Stay Foolish.

A blog by Leon Oudejans

Natural Stupidity vs Artificial Intelligence

Introduction LO:

This comparison makes logical sense. If and when human stupidity is society’s default, then artificial stupidity (eg, PS-2023, Tilburg University-2022) would make (much) more sense than artificial intelligence. Today, it is impossible for technology to be more intelligent than humans.

Source: my 2017 blog

Human intelligence includes 4 areas:

  1. Known knowns: knowledge or facts;
  2. Known unknowns: beliefs or opinions;
  3. Unknown knowns: instinct or intuition;
  4. Unknown unknowns: fantasy or imagination.

Artificial “intelligence” is (mostly) limited to the area of known knowns (ie, knowledge, facts).

The 2nd category, known unknowns a.k.a. human beliefs, is the main reason for our natural stupidity (eg, the flat earth movement, Islamic terrorism).

The 4th category, unknown unknowns a.k.a. imagination, is the main reason for human exceptionalism (eg, airplane, automobile, computer, language, mathematics, submarine, smartphone, telephone).

To some extent, beliefs could be programmed within artificial intelligence (eg, the Three Laws of Robotics by Isaac Asimov). The existence of killer robots and/or drone warfare already shows that our natural killing instinct will be programmed within machines.

To a (very) limited extent, our instinct or intuition might be programmed within artificial intelligence (eg, external curiosity).

To a (very) large extent, I agree with philosopher, historian and bestselling author Yuval Noah Harari (b.1976): I’m more worried about the dangers of natural stupidity than AI.

TechSense: Natural Stupidity Versus Artificial Intelligence
By: Techsense Team
Date: 8 September (?year?)

“Machines that simulate human intelligence and mimic human actions are transforming industries. Artificial intelligence (AI) is different from other revolutionary technologies in that it stretches beyond the technology and engineering domains to include social sciences, behavioral sciences and philosophy. This is a strength, given the numerous AI applications in use today and its immense potential for the future. There’s also a drawback – just as AI imbibes human intelligence, it can also embed human stupidity.

Prejudices in training data

AI uses training data, comprising text, images, audio and/or video, to build a learning model and perform a particular task to a high degree of accuracy. Algorithms learn from this data and behave based on what the data has taught them.

The problem is that training data can contain prejudiced human decisions or reflect social or historical inequities even after variables such as race, gender or sexual orientation are removed. In 2016, Microsoft’s AI-based conversational chatbot for Twitter made news for the wrong reason after it began tweeting racist, misogynistic and anti-Semitic messages. It was the result of Twitter users taking advantage of the bot’s social-learning abilities to teach it to spew racist rants. While Twitter users were merely having fun with the bot, introducing biases into training data can have far-reaching consequences.

Choosing poor-quality data

Biases apart, inconsistencies in big data can skew outcomes. A plainly stupid decision would be to use a data set that does not reflect a model’s use case. Determining whether the data is representative of the problem, and the impact on outcomes from combining internal and external data, is important. For example, selection bias may creep in when the chosen data is not representative of the future population of cases the model will encounter. There are real-world examples of facial recognition software using datasets containing 70-80% male and white profiles, to the exclusion of other genders, ethnicities and races.

Human intervention in data selection and clean-up is crucial, but a lot depends on the reasoning capability of analysts. Subjectivity, or an inability to grasp the impact of poor-quality data on the outcomes the AI application will deliver, can quickly diminish its utility and success.

Looking to the future

We are in the age of ‘weak AI’, indicating that artificial intelligence in use today has a narrow focus and performs one action, such as driving a car or recognizing faces. ‘Strong AI’, surpassing humans in just about every cognitive task, is expected to arrive in the future. Hopefully, the powerful AI machines in the years to come will be free of the human beliefs, biases, subjectivity and errors in judgment prevalent in current models.”
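The selection bias the TechSense article describes can be made concrete with a small sketch. The data and names below are invented for illustration: a toy “model” fits a single decision threshold to training data in which group A outnumbers group B 9 to 1, and whose feature pattern for group B happens to be inverted. Because the model minimizes overall training error, it ends up perfectly accurate on the over-represented group and no better than a coin flip on the under-represented one.

```python
# Hypothetical illustration of selection bias; all data is contrived.
# Each sample: (feature_score, true_label, demographic_group).
# Group A "matches" score high; group B "matches" score low.
train = (
    [(0.8, 1, "A")] * 45 + [(0.2, 0, "A")] * 45 +  # 90 group-A samples
    [(0.3, 1, "B")] * 5  + [(0.7, 0, "B")] * 5     # 10 group-B samples
)

def fit_threshold(data):
    """Choose the threshold t (predict 1 if feature >= t) with the
    fewest training errors -- a stand-in for any loss-minimizing model."""
    best_t, best_err = None, float("inf")
    for t in sorted({x for x, _, _ in data}):
        err = sum((x >= t) != bool(y) for x, y, _ in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(data, t):
    return sum((x >= t) == bool(y) for x, y, _ in data) / len(data)

t = fit_threshold(train)
acc_a = accuracy([s for s in train if s[2] == "A"], t)
acc_b = accuracy([s for s in train if s[2] == "B"], t)
print(f"threshold={t}, group A accuracy={acc_a:.2f}, group B accuracy={acc_b:.2f}")
```

The skewed training mix, not the algorithm, produces the disparity: rerunning the same fit on a balanced 50/50 sample would force the threshold to trade off errors between both groups instead of sacrificing group B.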


