Memory AI: What Makes a Neural Network Remember?

Summary: By modifying a classic neural network with insights from recent biological findings, researchers created a new artificial intelligence model that shows improved memory performance.

Source: OIST

Computer models are an important tool for studying how the brain makes and stores memories and other kinds of complex information. But creating such models is a tricky business. Somehow, a symphony of signals – both biochemical and electrical – and a tangle of connections between neurons and other types of cells create the hardware necessary for memories to take hold. Yet because neuroscientists don’t fully understand the underlying biology of the brain, coding the process into a computer model to study it further has been a challenge.

Now, researchers at the Okinawa Institute of Science and Technology (OIST) have modified a commonly used computer model of memory, called the Hopfield network, taking inspiration from biology to improve its performance. They found that not only does the new network better reflect how neurons and other cells connect in the brain, but it can also hold many more memories.

The complexity added to the network is what makes it more realistic, says Thomas Burns, a PhD student in the group of Professor Tomoki Fukai, who heads the Neural Coding and Brain Computing unit at OIST. “Why should biology have all this complexity? Memory capacity could be a reason,” says Burns.

Hopfield networks store memories as patterns of weighted connections between different neurons in the system. The network is “trained” to encode these patterns; researchers can then test its memory by presenting a series of fuzzy or incomplete patterns and seeing whether the network recognizes them as ones it already knows.

In classical Hopfield networks, however, neurons in the model reciprocally connect to other neurons in the network to form a series of so-called “pairwise” connections. Pairwise connections represent how two neurons connect at a synapse, a connection point between two neurons in the brain.
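The store-then-recall procedure described above can be illustrated with a minimal numerical sketch of a classical, pairwise Hopfield network. This is not the researchers' code; the pattern sizes, the Hebbian learning rule, and the synchronous update scheme are standard textbook choices used here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
# Three random binary (+1/-1) patterns play the role of "memories".
patterns = rng.choice([-1, 1], size=(3, n_neurons))

# Hebbian "training": the pairwise weight between neurons i and j sums
# their co-activity across all stored patterns (no self-connections).
W = patterns.T @ patterns / n_neurons
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    """Repeatedly update all neurons until the network settles."""
    state = cue.copy()
    for _ in range(steps):
        field = W @ state            # input each neuron receives
        state = np.where(field >= 0, 1, -1)
    return state

# Test the memory: corrupt a stored pattern by flipping 10% of its
# entries, then let the network clean it up.
cue = patterns[0].copy()
flip = rng.choice(n_neurons, size=10, replace=False)
cue[flip] *= -1
recovered = recall(cue)
```

With only three memories in a hundred neurons, the network is well below its capacity limit, so the recovered state closely matches the original pattern; storing many more patterns than the network's capacity allows would make recall fail.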

But in reality, neurons have complex branching structures called dendrites that provide multiple connection points, so the brain relies on a much more complex arrangement of synapses to do its cognitive work. Additionally, the connections between neurons are modulated by other types of cells called astrocytes.

“It’s just not realistic that only pairwise connections between neurons exist in the brain,” says Burns.

He created a modified Hopfield network in which not only pairs of neurons but also sets of three, four, or more neurons could connect, as might occur in the brain through astrocytes and dendritic trees. Although the new network allows these so-called “setwise” connections, overall it contains the same total number of connections as before.
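To give a flavor of what a setwise connection adds, the sketch below extends the pairwise weight matrix with a third-order tensor so that triples of neurons contribute jointly to each neuron's input. This is an illustrative assumption, not the authors' exact formulation: the triplet Hebbian rule and the tensor `T` are generic higher-order Hopfield constructions, and the real model balances connection budgets in ways not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
patterns = rng.choice([-1, 1], size=(5, n))

# Pairwise Hebbian weights, as in a classical Hopfield network.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)

# "Setwise" (here, triplet) weights: T[i, j, k] sums the joint activity
# x_i * x_j * x_k over all stored patterns. This is a generic
# higher-order Hebbian rule used only for illustration.
T = np.einsum('mi,mj,mk->ijk', patterns, patterns, patterns) / n**2

def recall(cue, steps=10):
    s = cue.copy()
    for _ in range(steps):
        # Each neuron's input mixes pairwise and setwise contributions.
        field = W @ s + np.einsum('ijk,j,k->i', T, s, s)
        s = np.where(field >= 0, 1, -1)
    return s

# Corrupt a stored pattern, then recall it with the mixed network.
cue = patterns[0].copy()
flip = rng.choice(n, size=6, replace=False)
cue[flip] *= -1
recovered = recall(cue)
```

The triplet term reinforces the stored pattern on top of the pairwise signal, which is the intuition behind the finding that a mixture of pairwise and set connections retains the most memories.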

The researchers found that a network containing a mixture of pairwise and setwise connections worked best and retained the most memories. They estimate that it performs more than twice as well as a traditional Hopfield network.

“It turns out you actually need a combination of features in a certain balance,” Burns says. “You should have individual synapses, but you should also have dendritic trees and astrocytes.”


Hopfield networks are important for modeling brain processes, but they also have other powerful uses. For example, very similar types of networks called Transformers underpin AI-based language tools like ChatGPT, so the improvements identified by Mr. Burns and Prof. Fukai may also make these tools more robust.

Mr. Burns and his colleagues plan to continue working with their modified Hopfield networks to make them even more powerful. For example, in the brain, the strengths of connections between neurons are not normally the same in both directions, so Burns wonders whether this kind of asymmetry could also improve network performance. Additionally, he would like to explore ways to make the network's memories interact with each other, as they do in the human brain.

“Our memories are many and vast,” says Burns. “We still have a lot to discover.”

About this AI research news

Author: Tomomi Okubo
Source: OIST
Contact: Tomomi Okubo – OIST
Image: The image is in the public domain

Original research: The results will be presented at the International Conference on Learning Representations (ICLR)
