Stay Hungry Stay Foolish


A blog by Leon Oudejans

AI needs embodied cognition?

Intro LO:

There are four levels in human consciousness:

  1. Unknown unknowns resulting in fantasy (eg, fiction);
  2. Known unknowns resulting in beliefs (eg, science-fiction);
  3. Unknown knowns resulting in intuition (eg, innovation);
  4. Known knowns resulting in knowledge (eg, science, technology) and in tools.

The Axios AI+ article below is indeed consistent with “ancient” science-fiction (eg, humanlike creatures). However, modern science-fiction is about an interconnected (global or universal) mind. Having a body would be an obstacle for such an entity. Quite often such sci-fi entities have dark, evil souls.

The mind-body problem in AI, as framed by Axios AI+, seems far-fetched (to me). Perhaps the idea in this Axios article is (ultimately) related to artificially intelligent military robots. Then the article would indeed make logical and rational sense.

At the age of 64, my ageing body is slowly becoming an obstacle, according to my mind. My mind wants (much) more than my body allows. Moreover, I take regular power naps to refresh my body and my mind. My daily writing requires lots of energy.

According to the Sumerian civilisation, our soul can travel during our sleep and after our demise. I believe in that. Our body is limited to space and time, our mind is limited to time, and/but our soul is limitless. Actually, it might be a solution rather than a “mind-body problem”.


AI’s mind-body problem (Axios AI+)

By: Ina Fried and Ryan Heath

Date: 15 March 2024

Scientists using AI and other tools to simulate fruit flies, rodents and human toddlers aim to understand a key aspect of natural intelligence — and that research could move today’s generative chatbots up the AI ladder, as Axios’ Alison Snyder reports.

Why it matters: ChatGPT doesn’t have a body — but some AI researchers think “embodied cognition” is a necessary ingredient to achieve the field’s holy grail of artificial general intelligence, or AGI.

  • Others, including ChatGPT creator OpenAI, are betting all they need to reach AGI is to keep scaling up today’s large language models with more data and more computational power.

The big picture: Language, reasoning and other abstract skills tend to get the most credit for human intelligence. But gaining knowledge of how the world works by walking, crawling, swimming or flying through it is an important building block of all animal intelligence.

  • A group of prominent AI researchers last year advocated for an “embodied Turing test” to shift the focus away from AI mastering games and language, which are “well-developed or uniquely human,” toward those capabilities, “inherited from over 500 million years of evolution, that are shared with all animals.”

How it works: Teams of neuroscientists, anatomists and machine learning researchers around the world are building detailed virtual models of rodents, flies and human infants.

  • Researchers from Google DeepMind and HHMI’s Janelia Research Campus have built a virtual fruit fly by combining an anatomical model of the fruit fly skeleton, simulations of the physics a fly experiences (such as fluid dynamics, adhesion and gravity) and an artificial neural network trained on fly behaviors.
  • The behavior of the virtual fly is compared to the behavior of a real fly to update the virtual model until it matches the real bug’s actions — walking, flying and crawling upside down.
  • Members of the team previously built a virtual rodent.
  • Researchers at EPFL in Lausanne, Switzerland, also published a virtual fly model late last year.
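
The iterative matching described above — simulate the virtual fly, compare its behavior to a recorded real fly, and update the model until they agree — can be sketched as a toy loop. Everything here (the one-parameter drag model, the function names, the grid search) is an illustrative assumption for clarity, not the actual DeepMind/Janelia pipeline.

```python
# Toy sketch of "behavior matching": tune a virtual model until its
# simulated trajectory matches a recorded (real) one.

def simulate(drag, steps=50, dt=0.1, v0=1.0):
    """Simulate a point 'fly' whose velocity decays under drag."""
    x, v, traj = 0.0, v0, []
    for _ in range(steps):
        v -= drag * v * dt   # drag slows the fly
        x += v * dt
        traj.append(x)
    return traj

def mismatch(a, b):
    """Sum of squared differences between two trajectories."""
    return sum((p - q) ** 2 for p, q in zip(a, b))

# "Real" behavior, recorded with an unknown drag of 0.3.
real = simulate(0.3)

# Update the virtual model: coarse grid search over candidate drag values,
# keeping the one whose simulated behavior best matches the real fly's.
best = min((mismatch(simulate(d / 100), real), d / 100)
           for d in range(1, 100))[1]
```

In the real research, the "parameter" is not a single drag coefficient but the weights of an artificial neural network controlling a full anatomical body model, and the comparison spans walking, flying and crawling — but the compare-and-update structure is the same.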

The goal is to understand “how the body mediates between the brain and the world,” says Srinivas Turaga, a neuroscientist at Janelia and co-author of the preprint paper about the virtual fly posted yesterday.

  • Eventually, these models might be combined with diagrams of how neurons in the brain are connected with one another — “connectomes” — to try to understand how a network of neurons gives rise to a particular behavior.
  • “The body and the nervous system evolved together,” Turaga says. “And so intelligence, in some sense, isn’t just in the brain. There’s also mechanical intelligence” that helps animals move.

The intrigue: Embodied cognition also helps animals understand how the world works — by experiencing it.

  • “There’s an argument to be made that biological systems learn from interacting with the world,” says Jochen Triesch of the Frankfurt Institute for Advanced Studies.
  • “Most machine learning systems today learn by basically passively absorbing large data sets, whether it is video or images or captioned images,” Triesch says.
  • Learning through interaction with the world is something “really essential that most of the machine learning community is right now completely missing.”

Triesch and his colleagues are interested in human cognitive development and have developed MIMo, a virtual human model with the body of an 18-month-old child, complete with five-fingered hands. Its virtual body senses its surroundings with binocular vision, proprioception and a full-body virtual skin.

  • MIMo isn’t as detailed as the fruit fly model, but that makes it much faster to simulate, Triesch says. There is a tradeoff between the level of realism and the computation required, and the MIMo researchers believe the critical part of their model is the touch-sensitive skin rather than the exact body shape.

Reality check: It’s an open question — and debate — whether information about the brain-body relationship gleaned from neuroscience studies can be used to teach machines to work in the physical world.

  • “In AI, that’s the hardest problem yet,” says Aran Nayebi, a postdoctoral researcher at MIT who works at the intersection of AI and neuroscience to try to reverse-engineer neural circuits, including the visual system in mice.

Source:
https://www.axios.com/newsletters/axios-ai-plus-528bd5f1-d810-4324-ae4b-00ceff54b121.html


1 Comment

  1. Grant Castillou

    It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because, on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work — that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

