
Exphormer: Scaling Graph Transformers with Sparse Expander Attention


Introducing Exphormer: A Scalable Approach to Sparse Transformers for Graphs

Ameya Velingker, a Research Scientist, and Balaji Venkatachalam, a Software Engineer, both work on AI at Google. Graphs such as social networks and molecular structures are central to many machine learning problems, and graph neural networks (GNNs) are the standard tool for learning from them.

GNNs are based on a message-passing framework, in which each node exchanges information only with its neighbors. Graph transformers offer an alternative built on attention, but attending between all pairs of nodes becomes too expensive on large graphs. Exphormer addresses this with “expander graphs”: graphs that are sparse yet well connected, with properties that allow random walks to mix quickly.
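As a rough illustration of why expander graphs are useful here, the sketch below samples a random d-regular graph, a standard construction that is an expander with high probability, and checks its spectral gap. This is a minimal illustration assuming the networkx library; it is not the authors' actual construction, and the function name is illustrative.

```python
import networkx as nx
import numpy as np

def approximate_expander_edges(num_nodes: int, degree: int = 4, seed: int = 0):
    """Sample a random d-regular graph; such graphs are expanders w.h.p."""
    g = nx.random_regular_graph(degree, num_nodes, seed=seed)
    # Spectral gap (degree minus second-largest adjacency eigenvalue magnitude)
    # is a rough measure of expansion: a larger gap means faster-mixing walks.
    eigenvalues = np.sort(np.abs(nx.adjacency_spectrum(g)))[::-1]
    spectral_gap = degree - eigenvalues[1]
    return list(g.edges()), spectral_gap

edges, gap = approximate_expander_edges(num_nodes=1000, degree=4)
print(f"{len(edges)} expander edges, spectral gap ~ {gap:.2f}")
```

Note that the number of edges grows only linearly in the number of nodes (n·d/2), which is what makes attention over such a graph affordable at scale.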

Building on this idea, Ameya and Balaji developed Exphormer. It combines expander edges with the input graph and a small set of virtual nodes: the resulting interaction graph contains the edges of the input graph, the edges of an expander graph, and edges from every node to each virtual node. Attention is then restricted to this sparse interaction graph, as sketched in the code below.
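The following is a minimal sketch of that interaction graph as a plain edge list. Names such as `build_interaction_graph` are illustrative assumptions, not Exphormer's actual API.

```python
def build_interaction_graph(input_edges, expander_edges, num_nodes, num_virtual=1):
    """Union of input edges, expander edges, and edges to virtual nodes.

    Virtual nodes get indices num_nodes .. num_nodes + num_virtual - 1 and
    are connected to every original node, providing a global channel.
    """
    edges = set()
    # Edges from the input graph and from the expander overlay.
    for u, v in list(input_edges) + list(expander_edges):
        edges.add((min(u, v), max(u, v)))
    # Edges from every node to each virtual node.
    for k in range(num_virtual):
        virtual = num_nodes + k
        for u in range(num_nodes):
            edges.add((u, virtual))
    return sorted(edges)

# Example: a 4-node path graph, a tiny "expander" overlay, and one virtual node.
interaction = build_interaction_graph(
    input_edges=[(0, 1), (1, 2), (2, 3)],
    expander_edges=[(0, 2), (1, 3)],
    num_nodes=4,
)
print(interaction)  # attention is computed only over these sparse node pairs
```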

Exphormer achieves strong results on graph benchmarks and is now available on GitHub. It makes graph transformers practical on much larger datasets while remaining as expressive as dense transformers.

This work could also reach real-world applications where graphs are key, such as cybersecurity and traffic-flow prediction.
