
Stay Hungry. Stay Foolish.

A blog by Leon Oudejans

AI doesn’t forget, and that’s a problem (Axios)

18 January 2024


Axios title: AI doesn’t forget, and that’s a problem
By: Ryan Heath and Ina Fried
Date: 16 January 2024

Users want answers from artificial intelligence, but sometimes they want AI to forget things, too — creating a new category of research known as “machine unlearning,” Axios’ Alison Snyder reports.

Why it matters: Interest in techniques that can remove traces of data without degrading AI models’ performance is driven in part by copyright and “right to be forgotten” laws, but also by concerns about biased or toxic AI outputs rooted in training data.

Deleting information from computer storage is a straightforward process, but today’s AI doesn’t copy information into memory — it trains neural networks to recognize and then reproduce relationships among bits of data.

  • “Unlearning isn’t as straightforward as learning,” Microsoft researchers recently wrote. It’s like “trying to remove specific ingredients from a baked cake — it seems nearly impossible.”

Driving the news: A machine unlearning competition that wrapped up in December asked participants to remove some facial images used to train an AI model that can predict someone’s age from an image.

  • About 1,200 teams entered the challenge, devising and submitting new unlearning algorithms, says co-organizer Peter Triantafillou, a professor of data science at the University of Warwick. The work will be described in a future paper.

What’s happening: Researchers are trying a variety of approaches to machine unlearning.

  • One involves splitting up the original training dataset for an AI model and using each subset of data to train many smaller models that are then aggregated to form a final model. If some data then needs to be removed, only one of the smaller models has to be retrained. That can work for simpler models but may hurt the performance of larger ones. (See the shard-and-retrain sketch after this list.)
  • Another technique involves tweaking the neural network to de-emphasize the data that’s supposed to be “forgotten” and amplify the rest of the data that remains. (See the loss-based sketch after this list.)
  • Other researchers are trying to determine where specific information is stored in a model and then edit the model to remove it.
  • One obvious way to remove the influence of a specific piece of data is to take it out of the training data and then retrain the model, but the high cost of computation means that is basically a “non-starter,” says Seth Neel, a computer scientist and professor at Harvard Business School.
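
The shard-and-aggregate idea is sometimes described as “exact” unlearning: because each small model only ever saw its own slice of the data, deleting a record means retraining just that slice. Below is a minimal sketch of the idea, assuming a toy scikit-learn classifier with integer class labels; the class name, shard count, and majority-vote aggregation are illustrative assumptions, not the method of any group cited above.

    # Minimal sketch of shard-and-aggregate ("exact") unlearning.
    # Assumes integer class labels; names and defaults are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    class ShardedEnsemble:
        def __init__(self, n_shards=5, seed=0):
            self.n_shards = n_shards
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            # Partition the training set into disjoint shards, one small model per shard.
            self.X, self.y = X, y
            idx = self.rng.permutation(len(X))
            self.shards = np.array_split(idx, self.n_shards)
            self.models = [LogisticRegression(max_iter=1000).fit(X[s], y[s])
                           for s in self.shards]
            return self

        def predict(self, X):
            # Aggregate the shard models by majority vote.
            votes = np.stack([m.predict(X) for m in self.models])
            return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

        def unlearn(self, i):
            # Deleting training example i only requires retraining the shard that holds it.
            for k, shard in enumerate(self.shards):
                if i in shard:
                    self.shards[k] = shard[shard != i]
                    self.models[k] = LogisticRegression(max_iter=1000).fit(
                        self.X[self.shards[k]], self.y[self.shards[k]])
                    break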
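
The “de-emphasize and amplify” technique is often realized as a combined training objective: descend the loss on data to keep, ascend it on data to forget. Here is a minimal sketch, assuming a PyTorch classifier and cross-entropy loss; the function name and the forget_weight knob are illustrative assumptions, not a specific published recipe.

    # Minimal sketch of loss-based forgetting: one optimization step that lowers
    # the loss on retained data while raising it on data to be "forgotten".
    import torch
    import torch.nn.functional as F

    def unlearning_step(model, optimizer, retain_batch, forget_batch, forget_weight=1.0):
        x_r, y_r = retain_batch
        x_f, y_f = forget_batch
        retain_loss = F.cross_entropy(model(x_r), y_r)    # keep behavior on retained data
        forget_loss = F.cross_entropy(model(x_f), y_f)    # this term gets pushed up
        loss = retain_loss - forget_weight * forget_loss  # minimizing this raises forget_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return retain_loss.item(), forget_loss.item()

In practice the forget_weight term has to be tuned carefully, which echoes the article’s point that the goal is to remove traces of data without degrading the model’s overall performance.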

Yes, but: “Here’s the problem: Facts don’t exist in a localized or atomized manner inside of a model,” says Zachary Lipton, a machine learning researcher and professor at Carnegie Mellon University. “It isn’t a repository where all the facts are cataloged.”

  • And a part of a model involved in knowing about one thing is also involved in knowing about other things.

Zoom in: There’s particular interest in unlearning for generative language models like those that power ChatGPT and other AI tools.

  • Microsoft researchers recently reported being able to make Llama 2, a model trained by Meta, forget what it knows about the world of Harry Potter.
  • But other researchers audited the unlearned model and found that, by rewording the questions they posed, they could get it to show it still “knew” some things about Harry Potter.

Where it stands: The field is “a little messy right now because people don’t have good answers to some questions,” including how to measure whether something has been removed, says Gautam Kamath, a computer scientist and professor at the University of Waterloo.

  • It’s a pressing question if companies are going to be held liable for people’s requests that their information be deleted or if policymakers are going to mandate unlearning.
  • Neel says, “For simple models, we know how to do unlearning and have rigorous guarantees,” but for more complex models there isn’t “consensus on a single best method and there may never be.”

What to watch: For low-stakes problems it might be sufficient to stop a model from reproducing something verbatim, but serious privacy and security issues might require complete unlearning of information.

  • Here, Lipton says, near-term policy mandates should “proceed under the working assumption that (as of yet) mature unlearning technology does not exist.”

Source:
https://www.axios.com/newsletters/axios-ai-plus-117900d2-3a2c-4741-b807-bb8f2f9bb035.html
