Futurism title: 42 percent of CEOs think AI may destroy humanity this decade
Futurism subtitle: “It’s pretty dark and alarming.”
By: NOOR AL-SIBAI
Date: 17 June 2023
Introduction LO:
Clearly, I side with the 58% of CEOs who are “not worried” because the risk of AI is – indeed – “overblown”.
As long as machines do not have beliefs, the risk of Artificial Intelligence (AI) is minimal.
The likelihood that machines will ever have beliefs is also minimal because we (humans) do not even know what consciousness is. Moreover, we may never know.
Why?
In my view, each individual human consciousness represents an “upload/download network link” to universal consciousness – a.k.a. panpsychism (eg, Wiki). My idea is somewhat similar to distributed network computing.
As long as we do not understand consciousness (eg, how, what, when, who, where, why), I fail to see how we (humans) would ever be able to create consciousness in machines (eg, robots).
To quote the Dutch poet C. Buddingh’:
“Inspraak zonder inzicht leidt tot uitspraak zonder uitzicht.” (source) English translation: “Input without insight leads to output without perspective.” (source)
If one fails to understand the how-what-when-who-where-why of a topic, then any warning (see below) is pointless.
“Big Names
A large proportion of CEOs from a diverse cross-section of Fortune 500 companies believe artificial intelligence might destroy humanity — even as business leaders lean into the gold rush around the tech.
In survey results shared with CNN, 42 percent of CEOs from 119 companies surveyed by Yale University think that AI could, within the next five to ten years, quite literally destroy our species.
While the names of specific CEOs who share that belief were not made public, CNN notes that the consortium surveyed during Yale’s CEO Summit event this week contained a wide array of leaders from companies including Zoom, Coca-Cola and Walmart.
The breakdown of beliefs is pretty striking: while 34 percent of the surveyed CEOs say they think AI could extinguish humanity in the next 10 years and an additional eight percent said the same could happen within the next five, 58 percent say they’re “not worried” and that they don’t think such a catastrophe will happen at all.
Ironically, 42 percent of the CEOs also said they think the risk of AI is overblown.
“It’s pretty dark and alarming,” Yale’s Jeffrey Sonnenfeld, who conducted the survey during the virtual summit event, told CNN.
Over/Under
When it comes to the upsides of AI, the CEOs were still divided, though much less so than when asked about its dangers.
A mere 13 percent of those 119 CEOs said that they think the touted benefits of AI were overstated, while the remaining 87 percent said that they don’t think AI’s potential has been exaggerated. No surprises there — CEOs have been salivating for some time over AI’s potential to save them money by streamlining workflows and, of course, replacing human workers with much-cheaper algorithms, even if those AIs are far from being up to human snuff thus far.
Overall, Sonnenfeld said that the CEOs he surveyed can be broken up into five camps: the “curious creators” who “are like Robert Oppenheimer, before the bomb”; the “euphoric true believers” who can only see the tech’s good side; the “commercial profiteers” who “don’t know what they’re doing, but they’re racing into it”; the “alarmist activists”; and the “global governance advocates.”
“These five groups are all talking past each other,” the Yale professor said, “with righteous indignation.”
Given the alternating panic and push cycles we’ve seen from CEOs so far, we’ve gotta say we agree — and it’s wild that the people who will be making the hiring, firing, and investment decisions regarding AI haven’t yet come to anything close to consensus on it.”
Source:
https://futurism.com/the-byte/ceos-ai-destroy-humanity
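For what it’s worth, a quick tally of the survey figures quoted above – a minimal sketch in Python, assuming the percentages are exactly as CNN reported them and rounding the headcounts out of the 119 CEOs surveyed:

```python
# Quick tally of the Yale CEO Summit figures quoted above.
# Assumption: percentages are exactly as reported by CNN/Futurism;
# headcounts are rounded estimates out of the 119 CEOs surveyed.

SURVEYED = 119

shares = {
    "AI could destroy humanity within 10 years": 34,
    "AI could destroy humanity within 5 years": 8,
    "not worried at all": 58,
}

for label, pct in shares.items():
    print(f"{label}: {pct}% ≈ {round(SURVEYED * pct / 100)} CEOs")

doom_share = (shares["AI could destroy humanity within 10 years"]
              + shares["AI could destroy humanity within 5 years"])
print(f"Total fearing extinction within 5-10 years: {doom_share}%")          # 42%
print(f"All camps combined: {doom_share + shares['not worried at all']}%")   # 100%
```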