OpenAI’s new tool attempts to explain language models’ behaviors

May 9, 2023


It’s often said that large language models (LLMs) along the lines of OpenAI’s ChatGPT are a black box, and there’s certainly some truth to that. Even for data scientists, it’s difficult to know why a model responds the way it does, or why it sometimes invents facts out of whole cloth.

In an effort to peel back the layers of LLMs, OpenAI is developing a tool to automatically identify which parts of an LLM are responsible for which of its behaviors. The engineers behind it stress that it’s in the early stages, but the code to run it is available in open source on GitHub as of this morning.

“We’re trying to [develop ways to] anticipate what the problems with an AI system will be,” William Saunders, the interpretability team manager at OpenAI, told TechCrunch in a phone interview. “We want to really be able to know that we can trust what the model is doing and the answer that it produces.”

To that end, OpenAI’s tool uses a language model (ironically) to figure out the functions of the components of other, architecturally simpler LLMs — specifically OpenAI’s own GPT-2.

OpenAI’s tool attempts to simulate the behaviors of neurons in an LLM. Image Credits: OpenAI

How? First, a quick explainer on LLMs for background. Like the brain, they’re made up of “neurons,” which observe some specific pattern in text to influence what the overall model “says” next. For example, given a prompt about superheroes (e.g., “Which superheroes have the most useful superpowers?”), a “Marvel superhero neuron” might boost the probability that the model names specific superheroes from Marvel movies.
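
To make “neuron” concrete, here is a minimal sketch of reading out one such unit’s activations in GPT-2. It is written against the Hugging Face transformers library, not OpenAI’s released tool, and the layer and neuron indices are arbitrary examples chosen for illustration:

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

LAYER, NEURON = 5, 131  # arbitrary example indices, not from OpenAI's work

captured = {}

def hook(module, inputs, output):
    # `output` holds the post-GELU MLP activations: (batch, seq_len, 3072).
    # Slice out the single unit we are treating as "the neuron."
    captured["acts"] = output[:, :, NEURON].detach()

# The GELU module sits between the MLP's two projections, so hooking it
# observes all 3,072 of the layer's neurons after the nonlinearity.
handle = model.h[LAYER].mlp.act.register_forward_hook(hook)

text = "Which superheroes have the most useful superpowers?"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    model(**inputs)
handle.remove()

# Print each token alongside the neuron's activation on it.
for tok_id, act in zip(inputs["input_ids"][0], captured["acts"][0]):
    print(f"{tokenizer.decode(int(tok_id))!r:>15}  {act.item():+.3f}")
```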

OpenAI’s tool exploits this setup to break models down into their individual pieces. First, the tool runs text sequences through the model being evaluated and looks for cases where a particular neuron “activates” frequently. Next, it “shows” GPT-4, OpenAI’s latest text-generating AI model, these highly active neurons and has GPT-4 generate an explanation. To determine how accurate the explanation is, the tool provides GPT-4 with text sequences and has it predict, or simulate, how the neuron would behave. It then compares the behavior of the simulated neuron with the behavior of the actual neuron.
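
In schematic form, that explain-and-simulate loop might look like the sketch below. The `complete()` callback is a hypothetical stand-in for a call to an explainer/simulator model such as GPT-4; the function names and prompt wording here are illustrative assumptions, not taken from OpenAI’s released code:

```python
from typing import Callable, List, Tuple

# One (token, activation) record per token in an excerpt.
TokenActs = List[Tuple[str, float]]

def explain_neuron(top_excerpts: List[TokenActs],
                   complete: Callable[[str], str]) -> str:
    """Step 2: show the explainer model the excerpts on which the neuron
    fired most strongly and ask for a one-phrase explanation."""
    shown = "\n".join(
        " ".join(f"{tok}({act:.1f})" for tok, act in excerpt)
        for excerpt in top_excerpts
    )
    return complete(
        "Below are text excerpts, with a neuron's activation in "
        f"parentheses after each token:\n{shown}\n"
        "In one phrase, what is this neuron looking for?"
    )

def simulate_neuron(explanation: str, tokens: List[str],
                    complete: Callable[[str], str]) -> List[float]:
    """Step 3: given only the explanation, predict (simulate) the neuron's
    activation on each token of a held-out sequence, on a 0-10 scale."""
    reply = complete(
        f"A neuron is described as: {explanation}\n"
        f"For each of these tokens, {tokens}, output a predicted "
        "activation from 0 to 10, comma-separated, nothing else."
    )
    return [float(x) for x in reply.split(",")]
```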

“Using this methodology, we can basically, for every single neuron, come up with some kind of preliminary natural language explanation for what it’s doing and also have a score for how well that explanation matches the actual behavior,” Jeff Wu, who leads the scalable alignment team at OpenAI, said. “We’re using GPT-4 as part of the process to produce explanations of what a neuron is looking for and then score how well those explanations match the reality of what it’s doing.”
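
That scoring step can be as simple as correlating the simulated activations with the real ones. The snippet below is a minimal Pearson-correlation version, offered as one simple instance of a correlation-based score rather than the tool’s exact metric:

```python
import numpy as np

def explanation_score(actual, simulated):
    """Pearson correlation between the real neuron's activations and the
    activations the simulator predicted from the explanation alone.
    Near 1.0: the explanation predicts behavior well; near 0: it doesn't."""
    a = np.asarray(actual, dtype=float)
    s = np.asarray(simulated, dtype=float)
    if a.std() == 0 or s.std() == 0:  # constant activations: score undefined
        return 0.0
    return float(np.corrcoef(a, s)[0, 1])

# A simulation that tracks the real neuron closely scores near 1.
print(explanation_score([0, 2, 9, 1], [0, 1, 8, 2]))  # ~0.98
```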

The researchers were able to generate explanations for all 307,200 neurons in GPT-2, which they compiled in a dataset that’s been released alongside the tool code.

Tools like this could one day be used to improve an LLM’s performance, the researchers say — for example, to cut down on bias or toxicity. But they acknowledge that the tool has a long way to go before it’s genuinely useful. It was confident in its explanations for only about 1,000 of those neurons, a small fraction of the total.

A cynical person might argue, too, that the tool is essentially an advertisement for GPT-4, given that it requires GPT-4 to work. Other LLM interpretability tools, like DeepMind’s Tracr, a compiler that translates programs into neural network models, are less dependent on commercial APIs.

Wu said that isn’t the case — the fact the tool uses GPT-4 is merely “incidental” — and, on the contrary, shows GPT-4’s weaknesses in this area. He also said it wasn’t created with commercial applications in mind and, in theory, could be adapted to use LLMs besides GPT-4.


The tool identifies neurons activating across layers in the LLM. Image Credits: OpenAI

“Most of the explanations score quite poorly or don’t explain that much of the behavior of the actual neuron,” Wu said. “A lot of the neurons, for example, are active in a way where it’s very hard to tell what’s going on — like they activate on five or six different things, but there’s no discernible pattern. Sometimes there is a discernible pattern, but GPT-4 is unable to find it.”

That’s to say nothing of more complex, newer and larger models, or models that can browse the web for information. But on that second point, Wu believes that web browsing wouldn’t change the tool’s underlying mechanisms much. It could simply be tweaked, he says, to figure out why neurons decide to make certain search engine queries or access particular websites.

“We hope that this will open up a promising avenue to address interpretability in an automated way that others can build on and contribute to,” Wu said. “The hope is that we really actually have good explanations of not just what neurons are responding to but overall, the behavior of these models — what kinds of circuits they’re computing and how certain neurons affect other neurons.”


