
I would like to prove the following formulation of the Dirichlet energy for graph neural networks: $$ \begin{aligned} E(\mathbf{X}) &=\frac{1}{d_{i}} \sum_{j \in \mathcal{N}(i)} w_{i j}\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|^{2} \\ &=\operatorname{trace}\left(\mathbf{X} \Delta \mathbf{X}^{\mathrm{T}}\right) \end{aligned} $$


Background for the graph neural network setting: $\mathbf{X}$ is the matrix of stacked feature vectors $\mathbf{x}_{i}$ (one row per node), and the Laplacian acts on features as

$$ \begin{aligned} &(\Delta \mathbf{x})_{i}=\frac{1}{d_{i}} \sum_{j \in \mathcal{N}(i)} w_{i j}\left(\mathbf{x}_{i}-\mathbf{x}_{j}\right)\\ &\text { (normalized) Laplacian matrix }\\ &\Delta=\mathbf{D}^{-1}(\mathbf{D}-\mathbf{W})=\mathbf{I}-\mathbf{D}^{-1} \mathbf{W}\\ &\text { Degree matrix } \mathbf{D}=\left[\begin{array}{lll} d_{1} & & \\ & \ddots & \\ & & d_{n} \end{array}\right] \end{aligned} $$ and $$d_{i} = \sum_{j \in \mathcal{N}(i)} w_{i j} $$
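To make sure I have the definitions straight, I checked numerically that the matrix $\Delta=\mathbf{I}-\mathbf{D}^{-1}\mathbf{W}$ really does reproduce the entry-wise formula $(\Delta \mathbf{x})_{i}=\frac{1}{d_{i}} \sum_{j} w_{ij}(\mathbf{x}_{i}-\mathbf{x}_{j})$. (The weight matrix and features below are made-up example data, not from any particular graph.)

```python
import numpy as np

# Small symmetric weight matrix with no self-loops (example data).
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 0.5],
              [2.0, 0.5, 0.0]])
d = W.sum(axis=1)                      # d_i = sum_j w_ij
Delta = np.eye(3) - np.diag(1.0 / d) @ W   # Delta = I - D^{-1} W

# Feature vectors x_i stacked as rows of X.
X = np.array([[1.0,  2.0],
              [0.0, -1.0],
              [3.0,  0.5]])

# Entry-wise definition: (Delta x)_i = (1/d_i) * sum_j w_ij (x_i - x_j)
entrywise = np.array([
    (1.0 / d[i]) * sum(W[i, j] * (X[i] - X[j]) for j in range(3))
    for i in range(3)
])
print(np.allclose(Delta @ X, entrywise))  # True
```

The two agree because $\sum_j w_{ij} = d_i$, so $\mathbf{x}_i - \frac{1}{d_i}\sum_j w_{ij}\mathbf{x}_j = \frac{1}{d_i}\sum_j w_{ij}(\mathbf{x}_i - \mathbf{x}_j)$.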


I'm having trouble, as I feel there should be an extra summation over $i$ on the right-hand side of the energy. I start off by expanding the squared norm: $$\frac{1}{d_{i}} \sum_{j \in \mathcal{N}(i)} w_{i j}\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|^{2} = \mathbf{x}_{i}^{T}\mathbf{x}_{i}-\frac{2}{d_{i}}\sum_{j \in \mathcal{N}(i)} w_{ij}\,\mathbf{x}_{i}^{T}\mathbf{x}_{j}+\frac{1}{d_{i}}\sum_{j \in \mathcal{N}(i)} w_{ij}\,\mathbf{x}_{j}^{T}\mathbf{x}_{j}$$ (the first term simplifies because $\frac{1}{d_i}\sum_{j \in \mathcal{N}(i)} w_{ij} = 1$), but I don't know how to proceed.
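In case it helps, I verified this per-node expansion numerically on a random graph (a quick sanity check with made-up data, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n, f = 4, 3

# Random symmetric weights, no self-loops; random node features.
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
d = W.sum(axis=1)            # d_i = sum_j w_ij
X = rng.standard_normal((n, f))

i = 0  # check the expansion at a single node i
lhs = (1.0 / d[i]) * sum(W[i, j] * np.sum((X[i] - X[j]) ** 2)
                         for j in range(n))
rhs = (X[i] @ X[i]
       - (2.0 / d[i]) * sum(W[i, j] * (X[i] @ X[j]) for j in range(n))
       + (1.0 / d[i]) * sum(W[i, j] * (X[j] @ X[j]) for j in range(n)))
print(np.allclose(lhs, rhs))  # True
```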

JimSi
