Recently, I have noticed many articles on Artificial Intelligence (AI), including this one: Artificial Idiocy. It was written by Slavoj Žižek, Professor of Philosophy at the European Graduate School, and International Director of the Birkbeck Institute for the Humanities at the University of London.
I also noticed two excellent articles by Wim Naudé (b. 1968), an economist and scholar:
– Daily Maverick, 3 April 2023: Open letters, AI hysteria, and airstrikes against data centres – why is the Tech Nobility freaking out?
– NRC, 9 April 2023: Hysterie over AI verbloemt tekortkomingen tech-industrie ("Hysteria over AI masks the tech industry's shortcomings").
In my view, the current AI articles are mostly hype. Nevertheless, AI development is necessary following the information explosion and the resulting (human) information overload. I expect that AI will eventually result in (very) many expert systems, supporting both generalists and specialists.
Hence, my educated guess: AI will – eventually – create (very) many expert systems based upon our known knowns. The next step is to build decision support systems that include assumptions. The latter may come close to integrating beliefs, because our assumptions are rooted in our beliefs.
I suppose those future systems must assess, and explicitly state, the degree of accuracy of any statement that includes assumptions. Our assumptions and our beliefs make humans the most dangerous species. Integrating them will thus also make AI dangerous. Hence, our fears.
“The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.” – a quote by Stephen Hawking (1942-2018), as mentioned in a 2017 Wired UK article.
Note: all markings (bold, italic, underlining) by LO unless in quotes or stated otherwise.