“Microsoft has announced a deepening of its partnership with the company behind the artificial intelligence program ChatGPT by announcing a multibillion dollar investment in the business.” (Guardian) Meanwhile, ChatGPT is also accused of (assisted) plagiarism (eg, BI-2023, PC-Guide, Medium, Guardian-2022, TC-2022).
Is artificial intelligence (eg, my blogs, Wiki) now similar to artificial plagiarism of existing knowledge?
This discussion should not come as a surprise because nobody can explain consciousness, which includes knowledge (known knowns), beliefs (known unknowns), intuition (unknown knowns), and imagination (unknown unknowns). Perhaps, intelligence just measures consciousness (eg, IQ).
I suppose we tend to underestimate what we already know in our subconscious (eg, intuition). Moreover, we tend to overestimate the impact of a tool on gaining more knowledge. Apart from AI, other examples of (failed) holy grails are cryonics, immortality, the Internet of Things, and the self-driving car.
In 2021, Erik Larson published his book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Amazon: “Larson argues that AI hype is both bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods.” Note: markings by LO.
Developments in technology follow a clear pattern, similar to the 2001 Daft Punk song Harder, Better, Faster, Stronger:
- stronger: eg, tools;
- faster: eg, mechanization, automation;
- better: eg, robotics;
- harder: artificially intelligent robots.
In my view, these artificially intelligent robots will not have beliefs (known unknowns), intuition (unknown knowns), and/or imagination (unknown unknowns). However, these machines will process knowledge (known knowns) with immense speed (ie, faster) and accuracy (ie, better).
I suppose these artificially intelligent machines will be indispensable for (i) supporting a (rapidly) declining global population with (too) many seniors and (too) few children (eg, Economist-2023), (ii) the leisure and/or AI romance markets, and – eventually – (iii) space travel.
Once we find out the how-what-when-where-who-why regarding consciousness, everything I wrote above could – and would – change. In my view, that is (very) unlikely to happen because consciousness is not limited to humans. Universal consciousness would even stop us.
“First, intelligence is situational—there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole. Second, it is contextual—far from existing in a vacuum, any individual intelligence will always be both defined and limited by its environment. (And currently, the environment, not the brain, is acting as the bottleneck to intelligence.) Third, human intelligence is largely externalized, contained not in your brain but in your civilization. Think of individuals as tools, whose brains are modules in a cognitive system much larger than themselves—a system that is self-improving and has been for a long time.”
A quote from Erik J. Larson’s book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do
Stop! In The Name of Love (1965) by The Supremes
band, artist, lyrics, video, Wiki-band, Wiki-artist, Wiki-song
[Intro]
Stop in the name of love
Before you break my heart
[Verse 1]
Baby baby, I’m aware of where you go
Note: all markings (bold, italic, underlining) by LO unless in quotes or stated otherwise.