From robotics to drug discovery to classifying works of literature, graph neural networks seem applicable to almost every field. After all, most of the data we interact with can be structured as some sort of graph -- processes requesting resources in an OS, the computation graphs of neural networks, objects interacting in a scene, knowledge graphs relating words to one another. But why do we need a new kind of neural network to process them, when we already have CNNs and other frameworks available to us?
CNNs are good for grid-structured information -- they slide a filter of a fixed size over a fixed-size window. What do the convolutions afford a CNN? The sharing of features across neighboring pixels and regions. But how would this convolution act over a graph? That is, how do you “slide” a filter over the nodes and edges of a graph?
Well, one way to formulate this is to consider graph structures in the spectral domain.
First, our graph is the usual composition of vertices and edges, G = (V, E, W), except that now we also have a weighted adjacency matrix W: for every edge (i, j) that exists in the graph G, there is a weight w_ij attached to it.
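To make this concrete, here is a minimal sketch of a weighted adjacency matrix in NumPy. The particular 4-vertex graph and its weights are a made-up example, not anything from a specific dataset:

```python
import numpy as np

# A small undirected graph on 4 vertices (hypothetical example).
# W[i, j] holds the weight w_ij of edge (i, j); 0 means "no edge".
W = np.array([
    [0.0, 1.0, 0.5, 0.0],
    [1.0, 0.0, 0.0, 2.0],
    [0.5, 0.0, 0.0, 1.0],
    [0.0, 2.0, 1.0, 0.0],
])

# For an undirected graph, the weighted adjacency matrix is symmetric:
# the weight on edge (i, j) equals the weight on edge (j, i).
assert np.allclose(W, W.T)
```

An unweighted graph is just the special case where every w_ij is 0 or 1.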
❓ What does it mean to construct a spectral perspective?
Instead of looking only at the vertices themselves, we attach to each vertex a signal x_i, a real number. Stacking these gives a vector ‘x’ in R^n, where n is the number of vertices and x_i corresponds to the ith vertex.
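In code, a graph signal is nothing more than a vector with one entry per vertex. The values below are arbitrary, purely for illustration:

```python
import numpy as np

# A graph signal assigns one real value x_i to each of the n vertices,
# collected into a single vector x in R^n (here n = 4, matching a
# 4-vertex graph).
x = np.array([0.3, -1.2, 0.7, 2.0])

# x[i] is the signal value sitting at vertex i.
value_at_vertex_2 = x[2]
```

Think of it as a function on the vertex set: temperature at each sensor in a network, word counts at each document node, and so on.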
❓ The next question, naturally, is: how does this help? What tools do we now have? We can define the graph Laplacian L = D − W, where D is the diagonal degree matrix with entries D_ii = Σ_j w_ij.
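A short sketch of how the graph Laplacian is built and why it opens up a spectral view, continuing the same hypothetical 4-vertex weighted graph from above:

```python
import numpy as np

# Weighted adjacency matrix of the same hypothetical 4-vertex graph.
W = np.array([
    [0.0, 1.0, 0.5, 0.0],
    [1.0, 0.0, 0.0, 2.0],
    [0.5, 0.0, 0.0, 1.0],
    [0.0, 2.0, 1.0, 0.0],
])

# Degree matrix: D_ii = sum_j w_ij (total weight incident to vertex i).
D = np.diag(W.sum(axis=1))

# Combinatorial graph Laplacian.
L = D - W

# L is symmetric positive semi-definite, so np.linalg.eigh gives a real,
# non-negative spectrum. The eigenvalues act as graph "frequencies" and
# the eigenvectors as a Fourier-like basis -- this is the spectral
# domain in which graph convolutions are defined.
eigvals, U = np.linalg.eigh(L)

# Every row of L sums to zero, so the constant vector is an eigenvector
# with eigenvalue 0 -- the smallest eigenvalue of any graph Laplacian.
assert abs(eigvals[0]) < 1e-8
```

Filtering a signal x then means transforming it into this eigenbasis (U.T @ x), scaling each frequency component, and transforming back.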
❓ Why does this formulation make sense?
Intro to graphs: https://www.cis.upenn.edu/~cis515/cis515-14-graphlap.pdf