7/🧵
In conclusion, MOTIF's ability to integrate arbitrary motifs elevates KGFMs, achieving superior performance in practice! Our rigorous theoretical expressiveness study paves the way for designing even more advanced KGFMs (coming soon)! ✨
6/🧵
Moreover, we plot the similarity matrices for different MOTIF instances and observe that richer motifs indeed yield more distinguishable relation embeddings, significantly boosting link prediction performance.
5/🧵
Empirically, we conduct synthetic experiments to validate the hierarchy of expressive power of MOTIF!
We show that with a simple addition of 3-ary patterns, there is a boost in zero-shot performance over 54 KGs!
4/🧵
Theoretically, we show that MOTIF contains a hierarchy of provably more expressive instances by adding additional (higher-order) motifs!
For example, MOTIF with 2-path motifs (e.g., ULTRA) cannot distinguish between r₁(u, v₁) and r₁(u, v₂), but when equipped with 3-path motifs, it can!
3/🧵
We introduce MOTIF, a new general framework for KGFMs capable of integrating arbitrary graph motifs, capturing existing KGFMs such as ULTRA and InGram.
2/🧵
Most existing KGFMs limit themselves to binary motifs (i.e., capturing interactions between two relations), ignoring higher-order interactions among, e.g., three relations, leading to a loss of expressive power.
1/🧵
📄 www.arxiv.org/abs/2502.13339
Pre-trained KGFMs predict missing links on any KG with any new entities/relations! This is achieved by learning over shared patterns (aka motifs) across different types of relations. The choice of motifs defines the model's expressivity.
Knowledge Graph Foundation Models (KGFMs) are at the frontier of graph learning, but we didn't have a principled understanding of what we can (or can't) do with them. Now we do! 💡🧵
With Pablo Barcelo, Ismail Ceylan, @mmbronstein.bsky.social, @mgalkin.bsky.social, Juan Reutter, Miguel Romero!