Trust issues are probably the most fundamental issues between humans, and they also occur in primates like chimpanzees. Hence, trust issues in artificial intelligence (AI) make perfect sense. Moreover, the word intelligence in AI is utterly misleading – hence my articles about Artificial Idiocy. Also see my related 2023 blog.
Artificial “intelligence” is based on human sources (eg, articles, blogs, facts, opinions). Most human sources are biased because of choices, preferences, and priorities of an ideological nature. I aim to maximise pragmatism in my writing, but it’s very hard – and probably impossible – to escape ideology.
Karl Popper, one of “the 20th century’s most influential philosophers of science”, recognised that trust issue and proposed using these three categories instead:
- absolute truths (ie, true always and everywhere),
- objective truths (ie, facts),
- subjective truths (ie, opinions).
Most of our truths are actually opinions. Even our so-called facts are restricted in time and/or in space. Example: our sun will turn into a red giant in about 5 billion years, and will subsequently explode in about 10 billion years (eg, Science Alert-2022). Hence, Karl Popper argued that absolute truths may not exist.
Given the above, trust issues in artificial intelligence (AI) – and in human intelligence – are “normal” and cannot be avoided.
Axios Science: The world’s AI trust divide
By: Alison Snyder
Date: 19 January 2024
“The world is hurtling forward with developing AI tools and rolling out products and services. That’s causing anxiety for some and excitement for others attending the World Economic Forum this week in Davos.
The big picture: The divide in optimism about AI echoes broader concerns about how innovations are implemented and the role of scientists.
- New data released this week from the Edelman Trust Barometer found 74% of people surveyed said they equally trust scientists and peers for the truth about innovations, compared to 66% for company technical experts, 51% for CEOs and 47% for journalists.
- But people who think new technologies are being poorly managed trust their peers more than scientists.
- The trend is putting “innovation in peril” and rapid innovation “risks exacerbating trust issues, leading to further societal instability and political polarization,” according to the report.
Driving the news: AI dominated the storefronts rented out by corporations, governments and non-profits in Davos and featured prominently in the week’s programs and discussions.
- The rush by companies to roll out AI-based products and services is in part driven by fear of getting left behind, says futurist Amy Webb, who advises companies on long-term risks. “Some don’t even have a problem to solve.”
- “Trust is an obligatory adjacent phrase to AI,” says Webb, the CEO of the Future Today Institute. “It doesn’t mean people aren’t working on it. But in this and other contexts, if the word trust isn’t said, it’s expected that something else, something more nefarious, is happening.”
What’s happening: Some AI developers highlighted how they are finding ways to build trust and transparency in their technologies.
- ClimateGPT, a new AI tool for researchers, policymakers and business leaders, allows users to pose questions about climate change and trace the data sources for responses. Blockchain technology creates a public ledger of any changes made to the model. (More on that below.)
- Others are bringing users into the collection of data used to train their AI models. Pittsburgh-based Gecko Robotics, which sponsored an Axios event in Davos, deploys sensors on power plants, bridges, and other key pieces of infrastructure and trains algorithms on that data along with information from experts at the site where the data was collected.
- It creates “a system of record to understand and predict what’s going to fail when to extend the use of infrastructure and ensure through both that we don’t have a catastrophic failure with huge environmental impact,” says Jake Loosararian, the company’s founder and CEO.
But it isn’t necessarily a technical fix: Gabriele Ricci, chief data and technology officer at pharmaceutical company Takeda, says the company is focused on creating an internal culture that is “looking constantly at the way we interact with customers to build trust. It’s built on every single interaction.”
- “It’s not a tech strategy, but a business strategy for the digital world,” he says. “It’s a mindset shift.””
Source:
https://www.axios.com/newsletters/axios-science-d18917a7-5b79-471d-9db9-fc37938dfd4f.html